FKA Glossary of Industry Terms
In FKA’s Instructional Systems Design Methodology, an ability is a self-contained unit of work or expertise. It is the second level of breakdown of the Model of Performance. The ability statement is expressed as an action verb followed by a noun phrase, e.g., “Generate monthly sales reports.” Characteristics and parameters complete the description of the ability.
A verb that identifies a precise, observable behavior, e.g., describe, assemble, compare. Instructional objectives should always contain an action verb to ensure that they are specific and measurable. Verbs to avoid in objectives are “know”, “learn” and “understand” because they cannot be measured. Different action verbs apply to each of the six cognitive levels in Bloom’s Taxonomy.
Giving undivided attention to a speaker in a genuine effort to understand the speaker’s point of view. It usually includes the use of non-verbal cues such as nodding, eye contact and alert posture; and the use of verbal encouragers such as “Yes”, “Aha” and “Mmm”. Includes the skills of Clarifying and Confirming.
The final stage in Bruce Tuckman’s model of group development: Forming, Storming, Norming, Performing and Adjourning. Adjourning, sometimes referred to as Mourning, was added to the original four-stage model later; it involves completing the task and breaking up the team. See also Forming, Storming, Norming and Performing.
ADULT LEARNING PRINCIPLES
The action and conditions that support, enhance and promote learning in adults. Respecting the principles during the design, development and delivery of learning programs will significantly increase learners’ success.
The area of brain function that controls feelings, attitudes, and values. These are not easily measured. See also Bloom’s Taxonomy.
AGILE INSTRUCTIONAL DESIGN
An acronym for an approach to Instructional Systems Design that was first expressed by Conrad Gottfredson. The concept was adapted from the AGILE approach to computer systems design. The acronym stands for Align, Get set, Iterate & implement, Leverage, and Evaluate. The philosophy of AGILE is to rapidly implement small chunks of learning, repeating the cycle until a complete program is provided.
(1) See Test Alignment.
(2) In performance consulting, alignment measures the degree to which the learning needs and the performance and business needs are the same.
Two or more versions of a test that are considered interchangeable, in that they: measure the same constructs in the same ways, are intended for the same purposes, and are administered using the same directions.
ALTERNATE FORM RELIABILITY
A measure of reliability, in which alternate forms of the same test are administered to the same subjects on separate occasions. The alternate forms are compared for consistency.
The second phase in FKA’s Instructional Systems Design Methodology. During this phase the target learning population is analyzed, as well as the required job performance. The two deliverables from this phase are the Population Profile and the Model of Performance (MoP). See also Population Analysis, Population Profile, Performance Analysis and Model of Performance.
Malcolm Knowles’ theory of how adults learn, as opposed to children (pedagogy). He emphasized that adults are self-directed and expect to take responsibility for decisions; adult learning programs must accommodate this fundamental aspect. Andragogy makes the following assumptions about the design of learning: (1) Adults need to know why they need to learn something, (2) Adults need to learn experientially, (3) Adults approach learning as problem-solving, and (4) Adults learn best when the topic is of immediate value. In practical terms, andragogy means that instruction for adults needs to focus more on learners practicing and less on the content being taught. Strategies such as case studies, role playing, simulations, and self-evaluation are most useful. Instructors adopt the role of facilitator or resource rather than lecturer. Knowles is considered the father of learner-centered environments.
A rapid sequential presentation of slightly differing graphics that creates the illusion of motion. Animation can illustrate much more than static images but requires more computer processing.
(1) The middle component of FKA’s Systematic Learning Process—Presentation, Application and Feedback (PAF). The learner applies, uses or practices the new skills and knowledge just presented.
(2) The planned activity carried out during the Application stage that provides an opportunity for the learner to practice the new skills and knowledge just presented.
Those methods employed during Application to allow the learner to apply, use or practice what has just been presented, e.g., case study, game, role play. Sometimes called a practice method.
The means by which an application can be used simultaneously by more than one user. One person starts up the application on his/her computer and shares it with other users at other computers. This is useful in facilitated e-learning to demonstrate the use of an application or to provide supervised individual or group practice.
A question that asks about the use of content. For example:
• What is the value of the idea/concept on the job?
• How could you use the idea/concept on the job?
• In what situation(s) would the idea/concept be useful?
The ability of a person to acquire new skills and knowledge given opportunities to learn and practice.
Essentially the process of measuring learning that has either taken place or can take place. It is usually measured against stated learning outcomes:
• Predictive assessment attempts to measure what the learner might achieve given a suitable learning program.
• Attainment assessment attempts to measure what the learner knows or can do at the time.
See also Formative Assessment and Summative Evaluation (or Assessment).
A method of communicating where those taking part are NOT connected in real time. Examples in the workplace are e-mail, discussion boards and voicemail. In online learning, an event in which people are not logged on at the same time. For example, the instructor/facilitator might publish information on a Website and learners would read it later.
All of the verbal and non-verbal behaviors used to create rapport and facilitate communication between two people. The listener must apply himself/herself and be fully present in the conversation at hand. It is often used in counseling or coaching situations.
(1) A persistent feeling that influences a person to act positively or negatively toward an idea, object, person or situation. It is closely linked to personal opinions and beliefs. Known as the affective domain in psychology.
(2) One of eight performance factors; a negative attitude can result in poor performance.
Identifies human factors that affect work performance and/or the ability to learn. Also called Population Analysis.
The medium delivering sound.
In computer networks, bandwidth is used as a synonym for data transfer rate, the amount of data that can be carried from one point to another in a given time period (usually a second). Network bandwidth is usually expressed in bits per second (bps); modern networks typically have speeds measured in the millions of bits per second (megabits per second, or Mbps) or billions of bits per second (gigabits per second, or Gbps).
Different applications require different bandwidths. An instant messaging conversation might take less than 1,000 bits per second; a voice over IP (VoIP) conversation requires 56 kilobits per second (Kbps) to sound smooth and clear. Standard definition video (480p) works at 1 megabit per second (Mbps), but HD video (720p) requires around 4 Mbps, and HDX (1080p) more than 7 Mbps.
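The bits-versus-bytes arithmetic behind these figures trips people up, so here is a minimal sketch of the ideal transfer-time calculation. The 300 MB file size is a hypothetical example; real transfers also incur protocol overhead, so treat the result as a lower bound.

```python
def transfer_time_seconds(size_megabytes: float, bandwidth_mbps: float) -> float:
    """Idealized transfer time: convert megabytes to megabits, divide by Mbps."""
    return (size_megabytes * 8) / bandwidth_mbps  # 1 byte = 8 bits

# A hypothetical 300 MB video file over a 4 Mbps (HD-capable) link:
seconds = transfer_time_seconds(300, 4)  # 600.0 seconds, ignoring overhead
```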
(1) There are three types of behavior exhibited by the individuals in a group: Task-oriented behaviors contribute to the accomplishment of the task component; Maintenance behaviors contribute to the human component, i.e., the growth of interpersonal relationships; Self-oriented behaviors do not advance either the task or human component—they are negative behaviors (Johnson Graduate School of Management, Cornell University).
(2) Individual behavior has two modes of communication: verbal and non-verbal. See Verbal Communication and Non-Verbal Communication.
In cost-benefit analysis, the benefit is the total dollar value to the organization of any performance improvement as a result of the learning program.
In a statistical context, a systematic error in a test score. In discussing test fairness, bias may refer to construct problems that differentially affect the performance of different groups of people. Such problems might include the choice of words, sentence structure, or sequence of questions.
BINARY CHOICE ITEM
A non-performance test question that has only two possible choices for answers, e.g., True/False; Agree/Disagree; Yes/No.
The combination of multiple instructional strategies and/or information sources to present information and applications, and to provide feedback. Examples include combining e-learning materials and traditional print materials; leader-led and self-directed instruction; online tutorial and coaching.
Short for “Web log”. A Web page to which individuals post short messages, providing an inexpensive form of knowledge sharing among experts or any other individuals.
In 1956 Benjamin Bloom, an educational psychologist, developed a taxonomy of educational objectives which first divides them into three domains: Affective, Psychomotor, and Cognitive. Then each domain is further broken down into different levels of learning, with higher levels considered more complex and closer to complete mastery of the subject matter. See also Action Verbs.
Planned activities after a learning program ends that are designed to stimulate retrieval of the content presented during the program. It is a form of spaced practice.
Art Kohn reports that the optimum booster intervals are 2 + 2 + 2 (2 days, 2 weeks and 2 months).
Henry Roediger reports that there is no statistical difference in improved memory whether boosters last 5 seconds, 30 seconds or 5 minutes.
A discussion method with two stages. During the initial stage, creative thinking takes precedence and learners are encouraged to generate as many ideas as possible. The second stage consists of discussing and evaluating these ideas.
A form of self-directed learning and self-directed e-learning that presents different content to different learners based on their responses.
(1) A physical room separate from the classroom used for small groups to meet and work together on an activity without disturbing others.
(2) A feature of a virtual meeting or classroom application that lets small groups of participants separate from the large group and work together in different virtual rooms. Communication can be via chat or separate phone line.
Any planned learning activity following a formal learning program that moves learners closer to the performance objective/goal. Bridging activities occur in the workplace. A bridging activity moves the learner from the end-of-learning performance level to the required job performance level while a transfer activity only reinforces on the job what was learned in the program.
A set of activities to be completed at the end of a formal learning program back on the job along with an assessment plan to determine when the performance objective/goal has been met. A bridging strategy helps learners move from the end-of-learning performance level to the required job performance level while the transfer strategy only helps learners transfer what was learned in the program to the job.
Also called high-speed Internet. Broadband allows users to access the Internet at significantly higher speeds than those available through dial-up Internet access services. Broadband is capable of supporting full-motion interactive video applications.
The essential goals and objectives for a unit, department or organization. They are usually expressed in operational terms and are measured by hard data such as total sales, gross margins, wastes as a percentage of output and customer satisfaction. The business needs of an organization drive the performance needs.
Robinson, D.G., and Robinson, J.C., Performance Consulting: Moving Beyond Training.
A discussion method in which learners are divided into small groups for a short period of time. Each group has limited and specific objectives, a leader and a recorder, and a requirement that everyone contribute. The leader or recorder later reports back to the re-assembled large group.
(1) Mental, emotional and physical power or capability; the inherent capability of an individual or system to learn or perform specified actions.
(2) One of eight performance factors; a deficiency in capacity can result in below standard performance.
A detailed account of an event or a series of events presented for analysis or action by the learners. There are three major types of cases:
• Incident Process
• Critical Instance
The program and process by which a learner completes prescribed learning program(s) and passes an assessment with a minimum acceptable score. To increase validity and ensure authentication, the certification process should be proctored by an independent agent.
In FKA’s Instructional Systems Design Methodology, a characteristic is:
(1) an attribute of a concept; the concept is defined by its set of characteristics
(2) one of three factors used to calculate the relative priority of abilities and components: criticality, difficulty and frequency
Text-based real-time communication on the Internet. Can be used during online presentations to let participants ask and answer questions and communicate with the host and each other.
One of FKA’s performance parameters. It defines the surroundings, conditions, or situations in which an ability is performed on the job. It is identified during the performance analysis.
During active listening, if the listener does not understand what the speaker said, the listener can ask for more information to clear up his/her understanding. See also Confirming.
CLASSICAL TEST THEORY
The traditional approach to assessment which focuses on developing quality test forms. It can involve item analysis, reliability analysis and validity analysis as well as the criteria used to assemble test forms.
The process of categorizing test-takers into two or more discrete groups, such as pass/fail or master/non-master.
A clear example is one of the three kinds of examples created during Concept Analysis. A clear example matches all of the characteristics of the concept.
In a consulting relationship, the client can be one person or a group of people. The client has the Money to implement the intervention, the Authority to give approvals, and the Desire to see it through to a successful conclusion (MAD).
Has a limited number of logical answers, e.g., “Which instructional strategy would you recommend in this situation?” In a written test, closed questions can be implemented as True/False, multiple-choice, matching, fill-in-the-blank or short answer items.
A form of on-the-job performance support. The process of providing feedback, insight and guidance to individuals to help them attain their full potential in their business or personal life. Coaching can include counseling, mentoring and tutoring activities.
The area of brain function that handles mental processes. The revised Bloom’s Taxonomy (2001) divides this domain into six levels, which from lowest to highest are: Remember, Understand, Apply, Analyze, Evaluate and Create. In general, different action verbs are used for the objectives for each of the different cognitive levels.
The study of higher cognitive functions that exist in humans, and their underlying neural bases. Cognitive neuroscience draws from linguistics, neuroscience, psychology and cognitive science. Cognitive neuroscientists explore the nature of cognition (the mental action or process of acquiring knowledge and understanding through thought, experience, and the senses) from a neural point of view.
One of three roles performance consultants can play. The collaborative consultant works jointly with the client to resolve a problem or address a business opportunity. The consultant’s specialized technical knowledge is coupled with the client’s knowledge of the organization in a joint problem-solving relationship. See also Pair-of-Hands Role and Expert Role.
A type of discussion. A small group is selected to perform a task that cannot be handled efficiently by a large group. They then report back to the large group for direction and evaluation.
COMMUNITIES OF PRACTICE
Groups of people who share a concern or a passion for something they do and learn how to do it better as they interact regularly. It has an identity defined by a shared domain of interest. In pursuing their interest in their domain, members engage in joint activities and discussions, help each other, and share information. Members of a community of practice are practitioners.
COMPETENCE, LEVELS OF
In 1982 William Howell described the four levels of competence:
• Unconscious Incompetence – “I don’t know that I don’t know how to do it.”
• Conscious Incompetence – “I know that I don’t know how to do it.”
• Conscious Competence – “I can do it, but I have to think about it.”
• Unconscious Competence – “I can do it without even thinking about it.”
In FKA’s Instructional Systems Design Methodology, a competency is a cluster of related skills, knowledge and attitudes required by a number of job categories for a very broad population, such as, computer skills or problem-solving skills. It applies to performance on the job and can be measured against well-accepted standards.
In FKA’s Instructional Systems Design Methodology, competency analysis examines various capabilities exhibited by individuals in different jobs and organizational levels, e.g., effective communication skills, or critical thinking skills.
A type of test question which requires the test-taker to complete a statement by filling in the missing words or phrases in the blank spaces. Also called fill-in-the-blank. Tests recall of knowledge.
In FKA’s Instructional Systems Design Methodology, a component is the first level breakdown of an ability.
A concrete or abstract idea that cannot be easily defined by a synonym; a group or class or objects formed by combining all of their aspects or characteristics, e.g., closed question. Concepts are taught through a series of three types of examples: clear, divergent and near non-example.
In FKA’s Instructional Systems Design Methodology, concept analysis identifies the characteristics of a concept and provides examples to clarify the definition. Concept analysis is a vehicle for confirming understanding of the concept with a subject matter expert and planning how it may be effectively communicated to others through the use of examples.
Measures the degree to which the scores on one test are related to the scores on another, already established test, administered at the same time, or are related to some other valid criterion available at the same time.
It is one of eight performance factors; the characteristics of the environment within which job performance takes place. Unfavorable conditions can result in poor performance.
A numeric range, based on a sample, within which the population scores/statistics are expected to fall a specified proportion of the time (the confidence level, commonly 95%). Confidence intervals are expressed as “plus or minus” a value, usually between 3% and 10%. Wider intervals indicate lower precision; narrower intervals indicate greater precision.
The degree of certainty that a statistical prediction is accurate. Generally, a confidence level of 95% to 99% is considered acceptable; most researchers use 95%. A 95% confidence level means you can be 95% certain that the results from a sample, plus or minus a confidence interval, will hold true for the whole population that the sample represents.
For example, if 82% of a sample group passes a test (and your sample size was adequate), you can predict with 95% accuracy that the population will have the same results, plus or minus the confidence interval: “The population should score 82% ± 5%, 19 times out of 20 (95% of the time).”
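The example above can be reproduced with the standard normal-approximation interval for a proportion. This is a sketch, not necessarily the method the original example used; the sample size of 225 is a hypothetical figure chosen so the margin works out to roughly 5%, and z = 1.96 is the standard multiplier for a 95% confidence level.

```python
import math

def proportion_ci(p_hat: float, n: int, z: float = 1.96) -> tuple:
    """Normal-approximation confidence interval for a sample proportion.

    z = 1.96 corresponds to a 95% confidence level.
    """
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - margin, p_hat + margin)

# 82% of a hypothetical sample of 225 test-takers passed:
low, high = proportion_ci(0.82, 225)  # roughly (0.77, 0.87), i.e., 82% +/- 5%
```

Note how the interval narrows as n grows: quadrupling the sample size halves the margin.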
During active listening, the listener repeats what he/she understood the speaker was saying. The speaker can then validate the listener’s understanding or add more information to clarify. See also Clarifying.
CONFUSING TFU (TEST FOR UNDERSTANDING)
A weak learning interaction, in which the TFU is a fill-in-the-blank item that is poorly constructed and has too many blanks. Even if the learner knows the correct responses it is not clear which answers fit into which spaces. There is a high probability that the learner will give at least one incorrect response even if he/she understood the content.
Restrictions affecting the project, design, development, delivery, and job environments. Constraints should be identified as early as possible in Needs Identification during the Context Analysis. Known limits on time, budget, equipment, human resources, and facility constraints are a few examples.
In FKA’s Instructional Systems Design Methodology, content analysis examines the body of information needed to perform a job, e.g., new product information or health and safety regulations.
Measures the degree to which a test measures the intended content area and samples the total of that area. It is determined by subject matter experts.
In FKA’s Instructional Systems Design Methodology, context analysis is the process of identifying factors that impact the design, development or delivery of the proposed learning program. The intent of a context analysis is to provide information to the development team that will allow them to make decisions that are effective given the project, design and development, delivery and job parameters and constraints.
Sets of project, design and development, delivery and job considerations that describe the circumstance or environment in which the learning solution must work.
CORPORATE TRAINING PLAN
See Preliminary Learning Plan.
In cost-benefit analysis, the cost is the total dollar value of the intervention, including analysis, design, development, implementation, validation and evaluation.
The comparison of the total cost of designing and delivering the learning program with the anticipated benefit of the resulting improved performance.
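The comparison itself is simple arithmetic; a minimal sketch follows. The dollar figures are hypothetical, and the three summary measures (net benefit, benefit-cost ratio, ROI) are common conventions rather than FKA-prescribed metrics.

```python
def cost_benefit(total_cost: float, total_benefit: float) -> dict:
    """Compare the total cost of a learning program with its dollar benefit."""
    return {
        "net_benefit": total_benefit - total_cost,
        "benefit_cost_ratio": total_benefit / total_cost,
        "roi_percent": 100 * (total_benefit - total_cost) / total_cost,
    }

# Hypothetical figures: $50,000 to design and deliver; $80,000 of improved performance:
result = cost_benefit(total_cost=50_000, total_benefit=80_000)
# A benefit-cost ratio of 1.6 means $1.60 returned per $1.00 spent.
```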
In a workplace environment, this aspect of coaching should focus on helping the coachee identify and solve his or her own personal or professional problems.
In FKA’s Instructional Systems Design Methodology, one course corresponds to one responsibility in the Model of Performance (MoP).
Usually a visual representation showing the modules, and possibly the lessons within the course, and the recommended completion order for these components.
(1) The software that provides learning content and instruction via computer program.
(2) Any instructional materials used by instructors/facilitators and learners during the learning program.
The quality of being believable or trustworthy. Instructors/facilitators/coaches can achieve professional credibility if the learners believe that they have good interpersonal and communication skills, can effectively manage the learning/coaching situation and are sufficiently knowledgeable on the subject.
Gives recognition to a person for the purpose of maintaining or enhancing his/her good performance. Effective credits are more than a pat on the back or a vague statement such as, “Good job.” Effective crediting feedback provides information that helps the person maintain adequate or superior performance and motivates him/her to meet or exceed standards.
Standard by which something is measured.
In FKA’s Instructional Systems Design Methodology, the criterion test is the test at the end of a module. It should be designed to assess whether or not the module objective has been achieved. Also called a module test. See also Summative Evaluation (or Assessment).
CRITERION-REFERENCED TESTS (CRTS)
Assessment that divides test-takers into two or more distinct groups by comparing their scores to an established standard rather than to the scores of other test-takers. Certification exams are usually CRTs, not norm-referenced tests (NRTs).
A type of case study that involves a short, narrative description of an event or situation. The learner is required to explain what is being described and to provide recommended actions to be taken.
In FKA’s Instructional Systems Design Methodology, criticality is one of three characteristics used to rate the relative priority of abilities and components. The more critical an ability is to the job, the higher priority it will be given to be included in the learning program. The other two characteristics used to calculate the priority value are difficulty and frequency. See also Priority Value.
A series of related courses. In FKA’s Instructional Systems Design Methodology, the curriculum is the subset of the Model of Learning (MoL) to be included in the formal learning program after exclusions have been made. Curriculum + bridging strategy(ies) = MoL
The passing score that divides test-takers into two categories: those at or above the score, and those below. It can be used to classify test-takers into categories such as pass/fail, qualified/unqualified, master/non-master or selected/rejected.
A discussion of a controversial topic by participants who argue opposite sides of the issue.
Measures the reliability of a criterion-referenced test. In other words, if a person took the test more than once, would the classification decision (pass/fail, etc.) be the same?
See Legal Defensibility.
The output of any of the six phases of FKA’s Instructional Systems Design Methodology that require approval or sign-off, such as, a Solution Report, Model of Performance, Learning Plan or the learning materials themselves.
Any quantifiable characteristic of an individual. This may include location, department, tenure, previous learning courses attended, education, primary language, etc.
A three-step presentation method where the instructor/facilitator first describes and demonstrates an action or procedure. Then the learner is asked to narrate the instructor’s/facilitator’s second demonstration, and finally the learner describes and demonstrates the action or procedure under the supervision of the instructor/facilitator.
The third phase in FKA’s Instructional Systems Design Methodology. Design starts with the Model of Performance, refines the scope of the content to be included in the learning program, outlines the course, and finally plans the details of the design of the formal learning program, the bridging activities, and the job aids that will support the performance. The three deliverables from the Design phase are the Learning Scope, Learning Outline and Learning Plan.
The fourth phase in FKA’s Instructional Systems Design Methodology. Development starts with producing a prototype, if required, and getting it approved. Then all the materials are produced. Small sections may be tested with a few representative learners in the process called developmental testing. The Development phase ends when all materials are ready for the next phase, Implementation.
A validation activity that occurs during Development. A small piece of the learning materials is tested with individual learners or small groups of the target population. Developmental testing happens in parallel with the development of the learning materials.
Used to determine the exact skill levels of individual learners, as well as those areas where they are having problems. Whereas the entry test is typically used before instruction, the diagnostic test is typically used in conjunction with the instruction or as part of the post-test process.
In FKA’s Instructional Systems Design Methodology, difficulty is one of three characteristics used to rate the relative priority of abilities and components. Difficulty is based on complexity and uniqueness. The more difficult it is to perform an ability the higher the priority to include it in the learning program. The other two characteristics used to calculate the priority value are criticality and frequency. See also Priority Value.
Learning without an instructor’s/facilitator’s direct involvement. It is different from self-instruction in that the learning situation is usually controlled through its structured set-up and supervision. Learners then explore on their own. See also Experiential Learning.
Applied to an individual test item, it measures how well the item distinguishes those who do well on the test from those who do not. That is, those who do well on this question also do well on the test and, conversely, those who do poorly on the item also do poorly on the test.
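One common way to compute this is the classic upper-lower discrimination index: the proportion answering the item correctly in the top-scoring group minus the proportion in the bottom-scoring group (often the top and bottom 27%). The glossary does not prescribe a formula, so treat this sketch, and its ten-person data set, as illustrative only.

```python
def discrimination_index(item_correct, total_scores, frac=0.27):
    """Upper-lower discrimination index for one test item.

    Returns p(correct) in the top-scoring group minus p(correct) in the
    bottom-scoring group; values near +1 indicate strong discrimination.
    """
    n = len(total_scores)
    k = max(1, round(n * frac))  # size of each comparison group
    order = sorted(range(n), key=lambda i: total_scores[i])  # ascending by total
    p_lower = sum(item_correct[i] for i in order[:k]) / k
    p_upper = sum(item_correct[i] for i in order[-k:]) / k
    return p_upper - p_lower

# Hypothetical data: 10 test-takers; 1 = answered the item correctly, 0 = not
scores = [90, 85, 80, 75, 70, 65, 60, 55, 50, 45]
correct = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
d = discrimination_index(correct, scores)  # 1.0: all top scorers right, all bottom wrong
```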
This is a many-faceted method involving verbal exchanges among learners with varying amounts of participation and direction by the instructor/facilitator. It can be used in large or small groups, and can be highly structured or unstructured. There are many variations of the discussion method and the differences between them are often subtle. Five examples are:
• Buzz sessions
• Panel Discussion
An online “bulletin board” where learners and instructors/facilitators can leave messages and get responses to those messages. It can be used with e-learning courses to promote a feeling of community among learners, and to allow small-group activities.
An implementation strategy where the instructor/facilitator and learners are in physically separate locations. It can be either synchronous or asynchronous and it can include: correspondence, video conferencing, or e-learning.
The incorrect options in a multiple-choice question. The correct option is called the “key”.
Applied to each incorrect option of an individual multiple-choice question. It measures how well each distractor is performing. In other words: Is it too obviously wrong? Too confusing? Too close to the correct answer? etc.
A divergent example is one of the three kinds of examples created during Concept Analysis. A divergent example does NOT match most of the characteristics of the concept.
DOMAINS OF LEARNING
See Learning Domains
DRILL AND PRACTICE
An interactive exercise used to promote memorization of discrete facts or develop basic skills like keyboard operation. It involves the repetition of short sequences of practice with feedback as to correctness.
The predicted length of the learning program based on the estimated total number of skill and knowledge items, weighted by the level of difficulty of each ability. The FKA formula states the rates at which skill and knowledge items can be taught and practiced:
• easy items – 25 per hour
• moderately difficult items – 12 per hour
• hard items – 8 per hour
If the level of difficulty is unknown, assume 15 skill and knowledge items can be taught in an hour.
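The rates above translate directly into a duration estimate: divide each count by its rate and sum the hours. A sketch of that arithmetic, with hypothetical item counts:

```python
# Rates from the FKA formula above (skill/knowledge items per hour)
RATE_PER_HOUR = {"easy": 25, "moderate": 12, "hard": 8}
UNKNOWN_RATE = 15  # assume 15 items per hour when difficulty is unknown

def estimated_duration_hours(item_counts):
    """Sum hours across difficulty levels: count divided by rate for that level."""
    return sum(count / RATE_PER_HOUR.get(level, UNKNOWN_RATE)
               for level, count in item_counts.items())

# Hypothetical program: 50 easy, 24 moderately difficult, 16 hard items
hours = estimated_duration_hours({"easy": 50, "moderate": 24, "hard": 16})  # 6.0 hours
```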
Uses Internet and intranet technologies to deliver a broad array of learning solutions (information, instruction and tools) to enhance knowledge and performance. A synonym for online learning. E-Learning can be self-directed or facilitated. See also Self-Directed e-Learning and Facilitated e-Learning.
ELECTRONIC PERFORMANCE SUPPORT SYSTEM (EPSS)
Applications designed to run simultaneously with other applications or embedded within other applications that provide support for the user in accomplishing specific tasks. An EPSS can deliver job aids, and just-in-time context-sensitive information.
Skills and knowledge the learner already possesses that are relevant to the learning objectives.
Conducted at the start of a learning program to determine the level of skill and knowledge. In a fully individualized course such as self-directed e-learning, this test would allow learners to bypass instructional modules, if they pass the tests for those modules.
Used to determine if a learner already has the required skill and knowledge contained in the learning program, in which case, the learner may be exempted from the program.
A prose composition on a limited topic. Most essays in non-academic situations are 500 to 1,000 words long and focus on a clearly definable question to be answered or problem to be solved. An essay may present a viewpoint through formal analysis and argument, or it may be informal in style.
The process of assessing the effectiveness of a learning program. In 1975, Donald Kirkpatrick first presented a four-level model of evaluation which became the industry standard:
• Level 1 – Reaction
• Level 2 – Learning
• Level 3 – Behavior
• Level 4 – Results
In this model, each successive level is built on information provided by the lower level and represents a more precise measure of the effectiveness of the learning program.
The sixth phase in FKA’s Instructional Systems Design Methodology. According to Donald Kirkpatrick there are four levels of evaluation: Reaction, Learning, Performance and Results. Evaluation instruments are planned during the Design phase, produced during the Development phase, and used starting in the Implementation phase. Performance and Results data are collected after learners have returned to the job. During the Evaluation phase all collected data are analyzed.
Someone who always performs an ability at or above standards. Also called Master Performer.
The process of actively engaging learners in an authentic experience that will have benefits and consequences. Learners make discoveries and experiment with knowledge themselves instead of hearing or reading about the experiences of others. It is a type of Discovery Learning.
One of three roles performance consultants can play. The ‘expert’ consultant provides expertise and is responsible for the results, and may only collaborate with the client occasionally. See also Pair-of-Hands Role and Collaborative Role.
A class of computer program developed by researchers in artificial intelligence during the 1970s and applied commercially throughout the 1980s. It has two basic components: a knowledge base (gathered from experts in the field) and an inference engine. The system mimics an expert’s reasoning process, e.g., the system used to interpret satellite imagery to forecast weather.
FACILITATED E-LEARNING (FEL)
A form of e-learning in which an instructor/facilitator guides learning in a virtual environment often called a “virtual classroom”. Since the instructor/facilitator and learners are all logged in at the same time to the same environment this is considered a ‘synchronous’ e-learning activity.
See Population Factor.
The art of leading people through processes toward agreed-upon objectives in a manner that encourages participation, ownership and creativity by all those involved.
See Test Fairness.
(1) Completes FKA’s Systematic Learning Process—Presentation, Application and Feedback (PAF)—by providing information to the learners about their performance of the application. For example: accuracy, rate, correctness, effectiveness, etc.
(2) One of eight performance factors; lack of feedback or ineffective feedback may mean that the necessary performance improvements are not made.
A complete trial run of the test with a sample of test-takers to determine the adequacy and usability of the test itself as well as all the test procedures.
A field trip is a carefully arranged event in which a group of learners visits an object or place of interest for first-hand observation and study.
See Completion Item.
A type of blended learning that reverses the traditional educational arrangement by delivering instructional content, often online, outside of the physical or virtual classroom and moves activities, including those that may have traditionally been considered homework, into the classroom. In a flipped classroom model, learners complete reading assignments, watch online presentations and collaborate in online discussions; then they apply the information in the classroom under the guidance of the instructor/facilitator.
A data collection method using a small group of people from the target population. They are led through a structured interview process for the purpose of developing their individual and group ideas, opinions or recommendations.
Developed by Hermann Ebbinghaus, who in 1885 studied the memorization of nonsense syllables, such as “WID” and “ZOF”, by repeatedly testing himself after various time periods and recording the results. He was the first to demonstrate the exponential loss of memory of information that is not reinforced.
The structured learning program, which along with bridging activities, allows learners to meet the performance objectives.
The goal of formative assessment is to monitor learning and provide ongoing feedback that instructors can use to adapt their instructional methods and/or activities, and that learners can use to improve their learning. See also Summative Evaluation (or Assessment).
Formative Evaluation (of program materials)
Validation of learning materials conducted during their early, formative stages for the purpose of revising materials before widespread use. See also Developmental Testing and Pilot.
This is the first stage in Bruce Tuckman’s model of group development: Forming, Storming, Norming, Performing and Adjourning. During Forming, the group establishes its boundaries, terms of reference and goals. See also Storming, Norming, Performing and Adjourning.
In FKA’s Instructional Systems Design Methodology, frequency is one of three characteristics used to rate the relative priority of abilities and components. The more frequently an ability is performed on the job the higher its priority to be included in the learning program. The other two characteristics used to calculate the priority value are criticality and difficulty. See also Priority Value.
A tabular or graphical representation of a set of data, e.g., the number of times a given test score or group of scores occurs.
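As a concrete illustration, a frequency distribution of test scores can be tallied in a few lines of code; the scores below are made up:

```python
from collections import Counter

# Tally how many times each test score occurs.
scores = [70, 85, 85, 90, 70, 85, 100]
distribution = Counter(scores)
for score in sorted(distribution):
    print(score, distribution[score])
# 70 2
# 85 3
# 90 1
# 100 1
```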
An application method. Games can be competitive (players compete against one another or against the instructor/facilitator) or cooperative (players collaborate to solve a problem or meet an objective). When used in learning programs, games should be content specific. Adaptations of various board games can be used to engage learners while they practice. Sophisticated computer- or video-based games used to persuade or teach are called Serious Games.
The application of game-based mechanics, aesthetics and thinking to content to engage and hold learners’ attention and promote learning, e.g., point scoring, competition with others, rewards.
See Performance Gap.
GRAPHICAL USER INTERFACE (GUI)
A way of representing the functions, features and contents of a computer program or online lesson using icons, pull-down menus and a mouse. Instructional designers may have to work within the constraints of an existing GUI or help design a new GUI for their courses, e.g., menus, navigation.
A common set of agreed upon standards for group sessions that will encourage participation and ensure everyone’s rights will be respected. For example, classes will start on time; only one person can speak at a time. Online sessions may have unique ground rules, such as, muting your phone when you are not speaking; and letting the presenter know if you will be stepping away from your computing device.
In 1965 Bruce Tuckman identified four stages in group development: Forming, Storming, Norming and Performing.
In 1977 he added Adjourning as the final stage.
A presentation method where the information is delivered by a small group of learners to the rest of the group.
A mnemonic for a coaching sequence: Goal setting, Reality checking, Option generation and Will to act. This model was developed in the U.K. in the late 1980s.
Supporting material to be used by a learner during facilitated instruction, e.g., short reading assignment, checklist, instructions, and solutions.
Text elements within online documents, usually underlined and in colored font, that can be clicked on to go to another location within the document or other location on the Internet.
(1) Morgan McCall and his colleagues working at the Center for Creative Leadership (CCL) are usually credited with originating the 70:20:10 ratio. Two of McCall’s colleagues, Michael M. Lombardo and Robert W. Eichinger, published data from one CCL study in their 1996 book The Career Architect Development Planner.
(2) Based on a survey asking nearly 200 executives to self-report how they believed they learned, McCall, Lombardo and Eichinger surmised that: “Lessons learned by successful and effective managers are roughly:
• 70% from challenging assignments
• 20% from developmental relationships
• 10% from coursework and training”
(3) More recently the learning model is explained as:
• 70% of what workers need to know and do is learned experientially doing their jobs
• 20% is learned socially through interactions with others
• 10% is learned during formal learning programs
A short planned activity whose purpose is to create a comfortable atmosphere that makes participants feel more at ease with each other and the instructor/facilitator. Icebreakers include: energizers, tension relievers, games, brain teasers and getting-acquainted exercises.
The fifth phase in FKA’s Instructional Systems Design Methodology. During this phase pilots are conducted, formal learning programs are disseminated or delivered, and bridging activities are completed by the identified target population.
A type of case study application where material is placed in the learner’s in-basket and he/she must take whatever action is necessary to place it in the out-basket.
Instructors/facilitators and learners are in the same location at the same time, e.g., traditional classroom instruction. Sometimes called face-to-face (F2F) delivery.
One of eight performance factors. Incentives include any forms of financial or recognition rewards. Financial rewards include: salaries, bonuses and stock sharing awards. Recognition rewards include: preferred work assignments, locations and shifts; time off; discretionary treatment; and non-monetary awards. A deficiency in any of these can lead to poor performance.
A type of case study application activity where an incomplete description of a situation is presented. Learners must determine what facts or materials are missing and ask the instructor/facilitator to provide them; however, they are given only the information they specifically ask for and nothing extra.
A presentation method involving one learner who prepares a short lesson on a topic and presents it to the rest of the group.
The second step in the Presentation component of the Systematic Learning Process. It is the process by which the relevant content is communicated to the learner. A variety of presentation methods can be used.
Any action from the learner that is recognized by the e-learning courseware. It may come from a touch-screen, keyboard, mouse or other peripheral.
FKA’s Instructional Systems Design Methodology identifies six instructional strategies: Leader-Led (LL), Self-Directed Learning (SDL), e-Learning, On-the-Job (OJT), Self Instruction (SI) or Stand-Alone Job Aid. To select the most appropriate strategy for a situation, you must consider the instructional strategy framework, along with the content itself and any context constraints.
INSTRUCTIONAL STRATEGY FRAMEWORK
When selecting an appropriate instructional strategy for a given situation you look at the need for: group vs. individual instruction, facilitated vs. unfacilitated instruction, and local vs. remote delivery mechanisms. When defining an e-learning solution, the need for synchronous vs. asynchronous communication is also considered.
INSTRUCTIONAL SYSTEMS DESIGN (ISD)
(1) An orderly design process moving from analysis, to design, to development, to implementation and evaluation, often referred to as ADDIE. Also known as Systems Approach to Training (SAT). Some organizations use an AGILE approach to instructional design.
(2) FKA has its own ISD Methodology that starts with Needs Identification, a critical phase that contains all the required pre-project investigations, recommendations and decisions. Another difference is FKA’s analysis phase which defines four types of performance analysis (job, competency, content and concept) that are synthesized into a comprehensive Model of Performance. Finally, the FKA cycle shows the validation steps at all stages.
INSTRUCTOR-LED TRAINING (ILT)
See Leader-Led Instruction.
Document that directs the instructor/facilitator in the presentation, application and feedback components for the course. See also Lesson Plan.
A tool used to collect and organize information, e.g., a questionnaire, scale, test.
The two-way flow of information between two or more people. In the physical classroom, the instructor/facilitator makes statements, asks questions or creates situations to which the learners respond. Can be used to draw content from the learners. Meaningful interaction keeps learners engaged and improves both retention and transfer of the new skills and knowledge to work.
A style of instruction used during the information transfer portion of FKA’s Systematic Learning Process that keeps learners actively involved in the learning through the “VIVE” formula:
• Variety – Use different presentation methods and media to appeal to different learners and keep interest high.
• Interaction – Ask learners meaningful questions and provide activities to involve learners in building content.
• Visuals – Support the content with colored charts, tables, pictures, models, props, video and animation to enhance clarity and support retention.
• Examples – Use relevant examples that are appropriate to the audience to clarify content and keep learners motivated.
VIVE should be a goal for all instructional strategies. See also Motivation.
The two-way flow of information between a computer and a user. During self-directed e-learning the learner provides input in response to the program. Meaningful interactivity keeps learners engaged and improves both retention and transfer of the new skills and knowledge to work.
(1) Originated as a computer term. Interleaving is a technique used to improve performance of storing data by putting data accessed sequentially into non-sequential sectors.
(2) In learning design instead of presenting content in a series of blocks of content (AAA, BBB, CCC), the content is broken up into smaller chunks and practiced in parallel. Interleaving mixes practice on several related skills together (ABC, ABC, ABC). Neuroscientists believe that practicing related skills or concepts in parallel is an effective way to improve memory.
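The contrast between blocked practice (AAA, BBB, CCC) and interleaved practice (ABC, ABC, ABC) can be sketched in a few lines; the skill and item names are illustrative:

```python
from itertools import chain

# Three skills, each with three practice items.
skills = {"A": ["A1", "A2", "A3"], "B": ["B1", "B2", "B3"], "C": ["C1", "C2", "C3"]}

# Blocked practice: all of A, then all of B, then all of C.
blocked = list(chain.from_iterable(skills.values()))

# Interleaved practice: one item from each skill per round.
interleaved = list(chain.from_iterable(zip(*skills.values())))

print(blocked)      # ['A1', 'A2', 'A3', 'B1', 'B2', 'B3', 'C1', 'C2', 'C3']
print(interleaved)  # ['A1', 'B1', 'C1', 'A2', 'B2', 'C2', 'A3', 'B3', 'C3']
```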
Applied to a set of items on a test to measure how well the scores of the individual items correlate with one another. In other words, one criterion for a test to be deemed reliable is that the scores on individual questions are similar to each other.
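One common statistic for internal consistency on tests scored right/wrong is the Kuder-Richardson formula (KR-20); the glossary does not prescribe a specific statistic, so this is only an example. The response data are made up (rows are test-takers, columns are items, 1 = correct):

```python
# KR-20: an internal consistency estimate for dichotomously scored items.
# Values near 1 indicate highly consistent item scores; the data are illustrative.
def kr20(responses):
    n = len(responses)            # number of test-takers
    k = len(responses[0])         # number of items
    totals = [sum(row) for row in responses]
    mean = sum(totals) / n
    var_total = sum((t - mean) ** 2 for t in totals) / n
    pq = 0.0
    for i in range(k):
        p = sum(row[i] for row in responses) / n   # proportion correct on item i
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_total)

data = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
]
print(round(kr20(data), 3))  # 0.667
```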
Applied to performance-based tests or questions scored by human raters. It measures how consistent and dependable the scores are across raters.
(1) A formal or informal meeting in which the person initiating the discussion solicits information from a person or group of people. Interviews can be conducted face-to-face, over the phone or using virtual meeting software.
(2) A Presentation method where one or more experts are questioned by one or more learners.
Irrelevant TFU (Test for Understanding)
A weak learning interaction in which the learner is asked to give an answer or response that is NOT relevant to the skills or knowledge to be learned, e.g., the test for understanding questions concern nice-to-know content that is not critical to performance.
A statement, question, exercise or task on a test for which the test-taker must provide some form of response.
Statistical analysis that is applied to individual items to assess their quality. It may also involve item difficulty analysis and distractor analysis.
ITEM DIFFICULTY INDEX
Applied to an individual test item, it indicates the difficulty by measuring what proportion of the test-takers answered the question correctly.
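The index is simply the proportion of correct responses; a minimal sketch with made-up responses (1 = correct, 0 = incorrect):

```python
# Item difficulty index: proportion of test-takers who answered the item correctly.
# Higher values mean an easier item. The responses are illustrative.
def difficulty_index(responses):
    return sum(responses) / len(responses)

print(difficulty_index([1, 1, 0, 1, 0, 1, 1, 1]))  # 6 of 8 correct -> 0.75
```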
The name used by an organization to identify a grouping of related activities performed by individuals within their profession or occupation.
(1) A performance support system used on the job that assists the performer by summarizing the necessary skills and knowledge. It usually presents the information in a quickly read and understood format, e.g., flowcharts, checklists, decision tables, worksheets. See also Stand-Alone Job Aid.
(2) During FKA’s performance analysis, existing job aids currently in use should be identified and added as one of the parameters for the ability.
(1) A general term referring to the investigation of an individual job to document in an orderly manner all the skill and knowledge required to be successful on the job, including working conditions, mandatory standards and available support.
(2) FKA considers job analysis to be just part of performance analysis that results in a Model of Performance for a job. Performance analysis may include any of the four types of analysis: job, competency, content, and concept.
See also Performance.
The correct option in a multiple-choice question or other similar item such as True/False.
The space through which we move. This space can be broken down into four zones: public, social, personal and intimate.
The precise size of these zones can vary with situations, individuals and cultures. Instructors/facilitators/presenters are generally working in the learners’ social and public spaces but can enter their personal spaces when they want to promote a connection or gain commitment.
The sum of what is known; a body of truths, principles and information.
In FKA’s Instructional Systems Design Methodology, a knowledge item is the smallest unit of information required to perform an ability.
A task based on a cognitive skill set, i.e., one which shows that a learner knows how to do the task without actually performing the activity.
A test—often a written test—that measures knowledge and NOT performance. Also called non-performance test.
Creating, capturing, organizing and storing knowledge and experiences of individual workers and groups of workers within an organization and then making it available to others in the organization.
LEADER-LED INSTRUCTION (LL)
Instructional strategy in which an instructor/facilitator presents the information, conducts the application and provides feedback. It is usually directed toward a group rather than at an individual, although one-to-one tutoring is also a form of leader-led instruction.
State what the learners will be able to do at the end of the lesson or module, NOT what the instructor/facilitator will do. For example: “Upon successful completion of this lesson, learners will be able to add a new client to the sales database.” NOT “During this lesson the instructor/facilitator will demonstrate how to add a new client to the sales database.” The first objective gives you something specific and measurable against which you can judge learners’ behavior.
A device or information source used during the learning period that guides or assists the learning process, for example, handouts that support classroom or OJT activities. It is NOT a job aid.
LEARNING CONTENT MANAGEMENT SYSTEM (LCMS)
Manages the process of creating, storing and maintaining learning content. The components of an LCMS are: an authoring application (editors), a learning object repository, and administration tools. Some LCMSs are adding the functions found in Learning Management Systems.
The manual given to learners participating in the learning program. It contains more than copies of the presentation slides, for example, worksheets and exercise instructions. It may act only as a follow-along guide to be used during the program or include references and job aids to be used back on the job. There is usually an index and/or table of contents. Also called a Participant Manual.
Learning can occur in three domains: affective (emotional), cognitive (mental) and psychomotor (physical). See also Bloom’s Taxonomy.
LEARNING, EVALUATION OF
See Level 2 Evaluation – Learning.
See Learning Program.
A small group of teaching points presented as a unit and followed by a test for understanding. In FKA’s Instructional Systems Design Methodology, more complex lessons may be broken down into learning interactions.
See Learning Program.
LEARNING MANAGEMENT SYSTEM (LMS)
Manages the documentation, tracking, reporting and delivery of learning. Some LMSs are adding the functions found in Learning Content Management Systems.
Identify what people will need to learn and transfer to the job to enable them to meet the performance needs of the organization.
Robinson, D.G., and Robinson, J.C., Performance Consulting: Moving Beyond Training
A self-contained piece of learning material with an associated learning objective, which could be of any size and in a range of media. Learning objects can be re-used and combined with other objects for different learning purposes. To improve the reusability of online learning objects, or to integrate them into Learning Content Management Systems and Learning Management Systems, a specific data format, SCORM, is used.
An organization that is able to transform itself by acquiring new knowledge, skills, or behaviors. In successful learning organizations, individual learning is continuous, knowledge is shared, and the culture supports learning. Employees are encouraged to think critically and take risks with new ideas. All employees’ contributions are valued.
The second design document produced in the Design phase of FKA’s Instructional Systems Design Methodology. It communicates structure and outcomes. It contains the module and lesson outlines, grouped and sequenced in the order they will be taught. A Learning Outline may include estimates of the formal learning time.
The final document produced in the Design phase of FKA’s Instructional Systems Design Methodology. It expands on the Learning Outline to include ordered learning interactions; presentation, application and feedback methods; media; planned times; and introductions and conclusions for the lessons, modules and the course.
A formal course, module or lesson designed to improve performance by incorporating all the fundamentals of effective instructional design. It can be delivered in any of the six instructional strategies or combination of those strategies. Also referred to as a learning initiative or learning intervention.
The first design document produced in the Design phase of FKA’s Instructional Systems Design Methodology. It identifies the content to be included in the formal learning program. It is a subset of the Model of Performance with some abilities excluded from the program for a variety of reasons.
The manner in which a learner perceives, interacts with, and responds to the learning environment. Components of learning style are the cognitive, affective and physiological elements, all of which may be strongly influenced by a person’s cultural background. Included also are perceptual modalities, information processing styles and personality patterns.
Note: There is substantial criticism of learning style theories from scientists who have reviewed the related research and found no evidence to support the theories.
See Bloom’s Taxonomy.
The degree to which learners retain the knowledge and apply the skills achieved during the formal learning program to the workplace. Transfer of learning is affected by: the learners’ motivation to apply the skill and knowledge, the effectiveness of the program and its transfer activities, and the degree of support in the workplace.
A presentation method where an expert delivers a prepared presentation on a specific topic. It is essentially one-way communication with the instructor, facilitator or lecturer as the source of all the information.
The extent to which there is evidence to demonstrate the reliability and validity of a test—the more data the better. To defend a test in a court of law you must use and document sound measurement procedures including data collection and analysis throughout the design, development, administration and maintenance of the test. Content validity is particularly important to legal defensibility.
In FKA’s Instructional Systems Design Methodology, the lesson is the part of a module that prepares a person to perform a component. It is the smallest unit of instruction to include presentation, application and feedback. Long and/or complex lessons can be broken down into learning interactions.
In FKA’s Instructional Systems Design Methodology, the lesson objective describes the performance outcome to be achieved upon completion of a lesson. It is derived from a component statement and may be identical if the performance can be achieved with a formal learning program.
In FKA’s Instructional Systems Design Methodology, the lesson outline is an overview of a lesson which specifies the objective, test/application, and instructional strategy along with a content outline. When module outline and course outline information is added it becomes the learning outline.
(1) Written guide for instructors/facilitators that includes such things as: timing estimates; lists of equipment and media; instructions on how to present the content, conduct the applications, debrief and give feedback. FKA calls these Instructor/Facilitator Guides.
(2) In FKA’s Instructional Systems Design Methodology, the lesson plan is more detailed than the lesson outline. It includes: the final order for the learning interactions, the presentation methods, the design for the lesson introduction and conclusion, timing and media selections. It may also contain questions used to test for understanding. When module plan and course plan information is added it becomes the Learning Plan.
A test used to determine whether or not learners can meet the lesson objectives. It is also called a sub-criterion test and is part of a larger module test (criterion test).
LEVEL 1 EVALUATION: REACTION
Measures how the learners react to the learning program.
LEVEL 2 EVALUATION: LEARNING
Measures the extent to which learners change attitudes, improve knowledge and/or increase skill as a result of the learning program. The criterion and sub-criterion tests are administered to see whether the module and lesson objectives were attained.
LEVEL 3 EVALUATION: PERFORMANCE
Measures the extent to which change in behavior has occurred after the learning program. It evaluates the learners’ performance on the job following the program, using performance indicators of some type, and is usually done a few months after the program is completed.
LEVEL 4 EVALUATION: RESULTS
Measures the business results for the organization that can be attributed to the learning program. It usually involves performance indicators or organizational data, possibly in the form of regular reports.
A type of response format used in surveys, developed by Rensis Likert. Likert items have responses on a continuum, with response categories such as: “strongly agree,” “agree,” “no opinion,” “disagree,” and “strongly disagree.”
Any form of self-directed instruction in which all learners cover exactly the same content in the same sequence.
A transition statement that links an upcoming module or lesson to the previous one, e.g., “Now that you can log on to the sales database, you are ready to search for your client’s records.” A good link-in gives a quick overview and prepares learners for what comes next. You may not need a link-in if you just ended the previous module or lesson with a link-out.
A transition statement that links a module or lesson just ending to the upcoming one, e.g., “This afternoon you have had a chance to practice responding to some common customer complaints. Tomorrow, you will learn how to recognize more serious complaints that should be forwarded to your supervisors.” A good link-out gives a quick review and prepares learners for what comes next. You may not need a link-out if you will immediately give a link-in to the next module or lesson.
A facilitated synchronous e-learning event delivered in a virtual classroom.
One part of the local vs. remote consideration for selecting an instructional strategy. Answers: “Are learners in one location or spread out geographically?” If learners are close together, facilitated in-person classes are more feasible.
Information retained in the brain and retrievable over a long period of time, often over the entire life span of the individual. It is not literal like short-term memory; it often stores concepts, relationships, etc. New information is integrated with existing knowledge.
An acronym used to help identify the client in a consulting relationship. The client has the Money to implement the intervention, the Authority to give approvals, and the Desire to see it through to a successful conclusion. See also Client.
One of three types of behavior exhibited by individuals in a group. Maintenance behaviors contribute to the human component, the growth of interpersonal relationships. They include: encouraging communication, setting standards, mediating, observing processes and compromising. See also Task-Oriented Behavior and Self-Oriented Behavior.
Johnson Graduate School of Management, Cornell University
Assesses the test-takers’ ability to recognize the correct matches between two lists of items. Can also test higher levels of cognition such as apply and analyze.
One of eight performance factors. In this context, it refers to the gauge, method/process and results of assessed performance. Measurements should be taken against existing performance standards, i.e., the Model of Performance. A deficiency in measurement can result in poor performance.
Tools used for storing and communicating information. Types include: text, audio, graphic, video, animation, etc. Delivery formats can be generally split into paper or electronic. Distribution modes of electronic materials include: DVD, USB flash drive or Internet.
The faculty by which the brain stores and retrieves facts, events, impressions, etc. New information is first screened by the sensory register, passed to short-term memory and, if appropriate, transferred into long-term memory to be stored for later retrieval. The goal of learning programs is to get as many skill and knowledge items as possible into learners’ long-term memory. See also Sensory Register, Short-Term Memory and Long-Term Memory.
One type of coaching activity. Mentoring has its roots in apprenticeship where an older, more experienced person passes on skills and knowledge, experience and wisdom about a particular job or about the workplace in general.
A way of delivering content to learners in small (generally less than five minutes), very focused chunks. These chunks could provide, among other things: content presentation, practice, review and performance support. Computers with Internet access and mobile devices are well suited to delivering the lessons to the learners. Learners are in control of what they learn and when they learn it.
MISSING TEACHING POINT
A weak instructional item in which the learner is expected to answer a question when insufficient content or teaching points have been presented to allow a correct response.
M-LEARNING (MOBILE LEARNING)
Learning that takes place via wireless devices such as smartphones, tablets, or laptops.
One type of media. It is a three-dimensional physical or virtual representation of a person, object, equipment or structure.
MODEL OF LEARNING (MOL)
In FKA’s Instructional Systems Design Methodology, the MoL specifies the curriculum, courses, modules and lessons that are required to prepare an individual to achieve the performance objectives specified by the Model of Performance.
MODEL OF PERFORMANCE (MOP)
In FKA’s Instructional Systems Design Methodology, the MoP specifies the set of abilities required by the population to meet an identified performance goal. It must be documented early in the process, where it is used to prepare the objectives and define the content of the tests; later, it is used to validate that content. If more than one type of analysis was completed, the MoP integrates data from the different analyses.
A method for setting the cut or passing score of a test. A panel of judges (subject matter experts) review a test, item by item. Each judge estimates what proportion of minimally competent test-takers would get the correct answer. These proportions are then averaged across items and judges to determine the panel’s recommended passing score.
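As a sketch, the panel’s averaging (this procedure is commonly known as the Angoff method) can be computed as follows; the judges and their ratings are hypothetical:

```python
def angoff_cut_score(ratings):
    """Recommended passing score from judges' item ratings.

    ratings: list of lists; ratings[j][i] is judge j's estimate of the
    proportion of minimally competent test-takers who would answer
    item i correctly.  Returns the cut score as a proportion of the
    total test score.
    """
    # Average each judge's estimates across all items ...
    judge_means = [sum(judge) / len(judge) for judge in ratings]
    # ... then average across judges for the panel's recommendation.
    return sum(judge_means) / len(judge_means)

# Hypothetical panel: two judges rating a three-item test.
panel = [
    [0.8, 0.6, 0.7],   # judge 1
    [0.9, 0.5, 0.7],   # judge 2
]
print(round(angoff_cut_score(panel), 3))  # recommended passing proportion
```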
In FKA’s Instructional Systems Design Methodology, a module is the part of a course that prepares a person to perform an ability.
In FKA’s Instructional Systems Design Methodology, the module objective describes the intended outcome of a formal learning module. It is based on a performance objective and, in fact, can be identical to it, if that performance can be achieved by the end of the formal learning program.
In FKA’s Instructional Systems Design Methodology, a module outline is an overview of a module which includes the module objective, the module test/application, the instructional strategies, and the outlines of the lessons that comprise the module. When other module outline information and course outline information are added, it becomes the Learning Outline.
In FKA’s Instructional Systems Design Methodology, a Module Plan is created from the Lesson Plans and includes the module introduction and conclusion and details about their timing and media required. When other module plan information and course plan information are added, it becomes the Learning Plan.
In FKA’s Instructional Systems Design Methodology, a module test (or application) is used to measure attainment of the module objective. It is derived directly from the module objective and must match both performance and conditions. It is also called a criterion test.
(1) The drive to accomplish an action. Unsatisfied needs motivate. On the biological level, basic human needs of food, shelter and survival are powerful motivators. On the psychological level, people need to be understood, affirmed, validated and appreciated. On business (and learning) levels, motivation occurs when people perceive a clear reason for improving their performance.
(2) The first step in the Presentation component of the Systematic Learning Process.
(3) In learning programs, it is important to establish the learner’s initial motivation early and to maintain it throughout. Learners’ motivation can be achieved by: (a) telling them how the topic is relevant to them; (b) asking them why it is relevant; and (c) using real-world examples to show how this topic has helped before. See also WIIFM.
(4) During the Information Transfer portion of FKA’s Systematic Learning Process it is essential to keep learners motivated. Ongoing motivation is maintained through application of the VIVE formula: Variety of methods, Interaction through effective questioning, Visuals that support the content, and Examples that reinforce relevance. See also Interactive Instruction and VIVE.
A performance involving physical movement or manipulation.
The integration of different media, including text, graphics, audio and animation into one program.
A multiple-choice item is made up of a stem (the question or statement) and several possible options. The test-taker must identify the correct option(s), referred to as the key(s). The incorrect options in a multiple-choice question are called distractors. Multiple-choice items are common on exams, particularly standardized tests, because they are easily marked and can test a large number of teaching points in a relatively short time.
• Single-select multiple-choice items have only one correct answer.
• Multiple-select multiple-choice items may have two or more correct answers.
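As an illustration, a multiple-choice item with its stem, options, keys and distractors could be modeled as follows; the class design and the item content are hypothetical, not part of any standard:

```python
from dataclasses import dataclass, field

@dataclass
class MultipleChoiceItem:
    stem: str            # the question or statement
    options: list[str]   # all possible answers: keys plus distractors
    keys: set[int] = field(default_factory=set)  # indices of correct option(s)

    @property
    def distractors(self):
        """The incorrect options."""
        return [o for i, o in enumerate(self.options) if i not in self.keys]

    def score(self, selected: set[int]) -> bool:
        """Single-select and multiple-select items score the same way here:
        the selection must match the key set exactly."""
        return selected == self.keys

# Hypothetical single-select item.
item = MultipleChoiceItem(
    stem="Which verb belongs in a measurable objective?",
    options=["understand", "describe", "know"],
    keys={1},
)
print(item.score({1}), item.score({0}))  # True False
```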
MULTIPLE ROLE PLAYING
One type of role play. The class is divided into small groups and all learners are players. Each player is given a written role or an assignment and the entire class role plays at the same time.
One of the three kinds of examples created during Concept Analysis. A near non-example matches many of the characteristics of the concept but fails to match at least one.
See Performance Needs Analysis and Training Needs Analysis.
NEEDS IDENTIFICATION PHASE
The first phase in FKA’s Instructional Systems Design Methodology. It is broken into two sub-phases—Determine Needs and Plan Project. During Determine Needs you identify the performance target; describe the population and work context and evaluate current performance; determine the cause of any performance gaps and recommend solutions. The deliverable here is called a Solution Report. If learning is a solution, you proceed to Plan Project sub-phase where you analyze the learning context; scope the project; develop a work plan; and analyze costs and benefits. The deliverable here is called a Preliminary Learning Plan.
See Cognitive Neuroscience.
See Knowledge-Based Test.
A solution for correcting poor performance that does not include formal learning, e.g., defining standards, increasing incentives, hiring more people.
Any communication not expressed in words, e.g., facial expression or body language.
This is the third stage in Bruce Tuckman’s model of group development: Forming, Storming, Norming, Performing and Adjourning. During Norming, the group is able to focus on the task now that rules and roles are clear. See also Forming, Storming, Performing and Adjourning.
NORM-REFERENCED TEST (NRT)
A test that compares a test-taker’s score to that of the other test-takers in the norm group. For example, standardized educational tests are NRTs while certification tests are usually criterion-referenced tests (CRTs).
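A norm-referenced score is often reported as a percentile rank within the norm group; a minimal sketch, with a hypothetical norm group:

```python
def percentile_rank(score, norm_group):
    """Percentage of the norm group scoring below the given score."""
    below = sum(1 for s in norm_group if s < score)
    return 100.0 * below / len(norm_group)

# Hypothetical norm-group scores.
norm = [55, 60, 62, 70, 71, 75, 80, 84, 90, 95]
print(percentile_rank(75, norm))  # 50.0 — half the norm group scored lower
```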
FKA defines three types of objectives:
• Performance Objective – describes the required performance on the job
• Module Objective – what learners will be able to do at the end of a module
• Lesson Objective – what learners will be able to do at the end of a lesson
A data collection method. An observer watches someone’s performance and compares it to a performance checklist. The performance checklist is based on the Model of Performance.
Reference information delivered through computer software. Most online Help is designed to explain how to use a software application or operating system, but can also be used to present information on a broad range of subjects.
Any learning experience or environment that relies upon the Internet as the primary delivery mode. See also e-learning.
ON-THE-JOB TRAINING (OJT)
One of the instructional strategies where a designated OJT supervisor, mentor or coach delivers and monitors the program: demonstrating new skills, observing the learner’s performance and giving constructive feedback. An effective OJT program includes a series of structured activities with learner and supervisor materials to support the program.
Allows learners to respond in their own words, e.g., “Describe the best way to establish an instructor’s/facilitator’s credibility.”
What the learners will be able to do at the end of a module or lesson.
A weak instructional item which uses excessive hints, cues or prompts in assisting the learner to perform as desired.
One of three roles consultants can play. The ‘pair-of-hands’ consultant assists the client who maintains all responsibility and control. See also Expert Role, Collaborative Role and Consulting Roles.
A structured conversation on a given topic among several panel members (usually subject matter experts or stakeholders) in front of an audience. A facilitator/moderator (or instructor acting in that capacity) directs the discussion among the panel members following an agreed upon format and then opens the discussion up to audience questions.
See Alternate forms.
PARALLEL FORMS RELIABILITY
See Alternate Forms Reliability.
See Performance Parameters.
Demonstrates that the listener understood what the speaker intended by restating the message using different words. This gives the speaker an opportunity to add more in the way of clarification. See also Confirming.
See Cut Score.
A person’s demonstrated ability to do a job. It is the sum of skills and knowledge and is affected by other performance factors, such as: capacity, attitude, standards, measurement, feedback, conditions and incentive.
In FKA’s Instructional Systems Design Methodology, this is the process of gathering and analyzing performance data to create a Model of Performance (MoP).
Activity designed to prove that a learner can actually do the task, not simply know how to do it.
A test which measures the learners’ ability to perform skills and apply knowledge while performing the skills.
A checklist of abilities derived from the Model of Performance (MoP). Includes all performance parameters (Circumstances, Standards, Tools and References, Support (human) and Job Aids).
A person who identifies performance needs within an organization and provides services that result in performance improvement. Learning is only one possible solution to performance problems.
PERFORMANCE, EVALUATION OF
See Level 3 Evaluation – Performance.
There are eight factors that impact performance: skill and knowledge, capacity, attitude, conditions, incentives, standards, measurement and feedback. Inadequacies with any of these can lead to performance problems. Learning should only be selected as the solution when lack of skill and knowledge, and sometimes poor attitudes, are causing the performance problem.
(1) During FKA’s Needs Identification phase, performance needs analysis is conducted to determine if there is a gap between the required performance and current performance.
(2) During FKA’s Design phase the gap between the performance objective and the module objective is identified. This gap can be narrowed to zero through bridging activities back on the job.
A systematic process of analyzing performance gaps, planning for future improvements in performance, designing and developing interventions to close the gaps, implementing the interventions, and evaluating the financial and non-financial results.
The measurable products of performance—what is created or accomplished. For example, sales results or absenteeism.
Describe what people actually have to do on the job in order to meet the business needs of the organization.
Robinson, D.G., and Robinson, J.C. Performance Consulting: Moving Beyond Training
PERFORMANCE NEEDS ANALYSIS
An investigation to determine if there is a gap between the required performance and current performance. It is carried out during the Needs Identification phase of FKA’s Instructional Systems Design Methodology. If there is a performance gap, further study is done to determine which performance factors are the most likely cause of the performance problem.
In FKA’s Instructional Systems Design Methodology, a performance objective defines exactly how an ability must be executed in the workplace. It is derived directly from the ability statement and performance parameters in the Model of Performance (MoP).
In FKA’s Instructional Systems Design Methodology, parameters provide details about the performance of an ability or component. The five parameters are: Circumstances, Standards, Tools and References, Support (human) and Job Aids.
The cause of the performance gap. It is usually based on one or more of the eight performance factors, e.g., lack of skill and knowledge, or the standards are not understood.
Once the probable cause of the performance gap has been identified, a performance solution is recommended, e.g., a learning program to improve skill and knowledge, or implementing a multichannel communication plan to broadcast the standards.
This is the fourth stage in Bruce Tuckman’s model of group development: Forming, Storming, Norming, Performing and Adjourning. During Performing, the group is ready to take action. See also Forming, Storming, Norming and Adjourning.
The FKA Instructional Systems Design Methodology has six phases. Each phase is a grouping of activities with a specific purpose and well-defined deliverables. Sign-off or approval signifies the completion of each phase.
A full presentation of the learning program in the required environment with a group of the target learners to test the course.
Applied to one or more test items that are tested with a small sample of test-takers. Data is collected and analyzed.
A small piece of software that adds functionality to an Internet browser, e.g., streaming audio or video.
Comes from Apple’s “iPod” and “broadcasting”. It is a method of publishing files—usually audio files—to the Internet. Users subscribe to a feed and receive new files automatically, usually at no cost.
A statistical technique used to determine the discrimination index of an item.
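One widely used discrimination statistic is the point-biserial correlation between a dichotomous item score (1 = correct, 0 = incorrect) and the total test score; the sketch below uses hypothetical scores:

```python
import math

def point_biserial(item, totals):
    """Point-biserial correlation between a dichotomous item score
    and the total test score (population standard deviation)."""
    n = len(item)
    mean_t = sum(totals) / n
    sd_t = math.sqrt(sum((t - mean_t) ** 2 for t in totals) / n)
    correct = [t for i, t in zip(item, totals) if i == 1]
    p = len(correct) / n                      # proportion answering correctly
    mean_correct = sum(correct) / len(correct)
    return (mean_correct - mean_t) / sd_t * math.sqrt(p / (1 - p))

# Hypothetical data: item scores and total scores for six test-takers.
item_scores = [1, 1, 1, 0, 0, 0]
total_scores = [9, 8, 7, 5, 4, 3]
print(round(point_biserial(item_scores, total_scores), 3))
```

A high positive value means the item discriminates well: test-takers who got the item right also tended to score high overall.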
A feature of some virtual meeting and classroom software that lets the host/presenter prepare multiple choice or binary choice questions—preferably ahead of time—to be used during the online session. The software computes the results, which can be shared immediately with all attendees.
Identifies human factors that affect work performance and/or the ability to learn. In FKA’s Needs Identification phase, population analysis is concerned with the human factors affecting job performance. In FKA’s Analysis phase, population analysis focuses on the people needing training and looks at the human factors affecting learning.
Characteristics used to describe the population. Data is gathered for the most critical population factors for the given project and the results analyzed according to the scale used. Factors may affect work performance or the ability to learn.
The summary report for population analysis. It identifies and describes the population and summarizes the data collected. It is one deliverable from FKA’s Analysis phase.
Conducted at the end of the learning program to assess learner mastery. Test items are based on the final module objective. See also Pre-Test/Post-Test.
A type of application in which learners apply their skills and knowledge to solve a problem thus demonstrating competence or subject mastery. Practical exercises usually follow the presentation component of the lesson.
Measures the degree to which the scores from a test predict the scores of future performance for the same group of subjects after the required period of time has elapsed. For a test to have predictive validity, its results must strongly correlate with the results of the later performance indicator.
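Predictive validity is typically quantified as the correlation between test scores and the later performance measure; a minimal sketch with hypothetical data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical: entry-test scores vs. job-performance ratings a year later.
test_scores = [60, 70, 80, 90]
performance = [2.0, 3.0, 3.5, 4.5]
print(round(pearson_r(test_scores, performance), 3))
```

A coefficient near 1 would indicate strong predictive validity; values near 0 would mean the test tells us little about later performance.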
PRELIMINARY LEARNING PLAN
The final deliverable from FKA’s Needs Identification phase. It identifies the proposed learning program and describes the learning context. It contains the proposed primary instructional strategy(ies), an estimate of the length of the learning program, a project plan and preliminary cost benefit analysis. Based on this document, management should be able to decide whether or not to move forward with the project and whether to complete the work with internal resources or to generate a Request for Proposal and hire external resources.
Something that learners must know or already be able to do satisfactorily before the learning program begins, as it is not included in the program.
An assessment done prior to instruction to ensure that learners possess the minimal skills needed to succeed because these skills will not be covered during the learning program.
The first component of FKA’s Systematic Learning Process. Presentation begins with establishing the initial motivation for the learners, then the information is transferred and finally some questions are asked to test for understanding. Presentation is followed by Application and Feedback.
Those methods which are used during Presentation to transfer the information, e.g., lecture, demonstration, reading assignment.
Pre-Test: An assessment done prior to instruction to determine the level of skills or knowledge that a learner brings to instruction.
Post-Test: An assessment done after the learning program to determine the change in learner ability or knowledge when compared to the pre-test data gathered before the start of the course.
In FKA’s Instructional Systems Design Methodology, the primary type of analysis establishes the fundamental organization of the Model of Performance (MoP) data. Data from any secondary analysis is integrated into this fundamental structure at key points. There are four possible types of analysis: job, competency, content and concept.
In FKA’s Instructional Systems Design Methodology, the priority is a value (1-36) representing the relative importance of an ability. It is calculated from the criticality, difficulty and frequency ratings.
Once the performance gap has been isolated, the probable cause of the gap is identified. It is usually based on one or more of the eight performance factors, e.g., lack of skill and knowledge, or the standards are not understood.
The group whose performance needs improving. Also called the Target Group.
Require learners to think about and use information that was presented previously. They involve the higher cognitive levels in Bloom’s Revised Taxonomy, i.e., Apply, Analyze, Evaluate and Create.
Document which outlines the various steps for each phase required to create, implement and evaluate the learning program. It should also include deadlines for each step, approval points and a description of the resources needed.
Samples of learning materials created to get approval on their look and feel, and functionality. They show the style, level of detail, and quality of materials. Once approved they become the standard or model for developing the remaining materials. Prototypes are produced at the start of FKA’s Development phase.
The area of brain function that controls physical movement and coordination. See also Bloom’s Taxonomy.
Measures the probability (ranging from 0 to 1) that the results of an assessment occurred by chance alone. A result is conventionally regarded as “statistically significant” if the likelihood that it is due to chance alone is less than 5 times out of 100 (P < 0.05).
QUESTION AND ANSWER (Q&A)
A presentation method where the instructor/facilitator asks prepared questions in order to cover a subject. It can also be reversed, with learners asking the questions and the instructor/facilitator giving the answers or referring them back to other learners to answer.
There are three dimensions for categorizing questions:
• Content: Theory versus Application questions
• Format: Open versus Closed questions
• Level: Recall/Recognition versus Processing questions
Asking Questions: Instructors/facilitators can ask:
• Overhead questions – a question asked of the whole group
• Direct questions – a question asked of one person
Responding to Questions: Instructors/facilitators can:
• Answer the question themselves
• Defer the question to a later more appropriate time
• Reverse it, asking the questioner what he/she thinks the answer is
• Relay it on to another learner or to the whole class
Instructors/facilitators should attempt to relay or reverse as many questions as possible in order to draw content from the learners and to maximize interaction.
A method of data collecting; a set of written questions which calls for responses on the part of the respondents; may be self-administered or group-administered; may be paper-based or electronic.
REACTION, EVALUATION OF
See Level 1 Evaluation – Reaction.
A presentation method in which learners cover teaching points by reading about them rather than by having the instructor/facilitator or a narrated, self-directed e-learning program present them.
Require learners to remember information that was presented previously. They involve the lower cognitive levels in Bloom’s Revised Taxonomy, i.e., Remember and Understand.
One of FKA’s performance parameters. It identifies any documentation used in performing an ability such as SOPs (Standard Operating Procedures) or regulations. Job Aids are listed as a separate parameter.
See Test Reliability.
One part of the local vs. remote consideration for selecting an instructional strategy. It answers: “Are learners in one location or spread out geographically?” If learners are spread across a wide area, even across the globe, some form of e-learning might be the most expedient and cost-effective solution.
The behavior of a learner when asked a question or when requested to perform some activity.
In FKA’s Instructional Systems Design Methodology, a responsibility is a job function or category of job requirements, such as, supervision or reporting. Each responsibility is broken down into abilities in the Model of Performance (MoP). A responsibility forms the basis of a course during the learning program.
RESULTS, EVALUATION OF
See Level 4 Evaluation – Results.
The act of storing and making information available for retrieval from long-term memory. Factors that impact learner retention are: their level of interest, repetition of information, association of new information to existing information, use of multiple channels, and allowing time for learners to reflect and process this new information.
Process in which information in your long-term memory is recalled. The act of retrieving information strengthens the memory traces, making it easier to retrieve the next time.
RETURN ON INVESTMENT (ROI)
The total benefit of an intervention divided by the cost. An ROI of greater than 1 indicates more benefit than cost.
A question that requires no answer from the listener because either the answer is obvious or the asker plans to answer it right away.
An application method in which learners are assigned roles and asked to act out a real life situation using guidelines provided by the instructor/facilitator. It can be relatively structured or spontaneous. Two examples of role playing are: multiple role playing and role rotation.
Players in a role play switch roles, allowing each person to have a turn playing the various roles.
Stands for: RDF (Resource Description Framework) Site Summary, or Rich Site Summary, or Really Simple Syndication. RSS is a protocol which provides an open method of publishing frequently updated Internet content. Using RSS files, publishers create a data feed that supplies headlines, links, and article summaries from their Website. Subscribers automatically receive this data feed.
A scoring guide used in subjective assessments. It implies that a rule defining the criteria of an assessment system is followed during the assessment. A scoring rubric makes explicit expected qualities of performance on a rating scale.
A subset of the population.
The specific size of the subset of the population being studied. Generally, the larger the sample size, the more reliable the results, and the more likely it is that the results can be applied to the whole population.
The part of the brain that screens input from all senses depending on current focus. It passes relevant information to short-term memory.
SCENARIO-BASED QUESTION (SBQ)
A type of multiple-choice question based on situations the learner will face back on the job. The SBQ first describes a situation, usually in the form of dialog between two or more people. The situation is followed by one or more multiple-choice questions. SBQs let learners apply new skill and knowledge to workplace situations, thereby helping the transfer to the job.
A specific number resulting from the assessment of an individual test-taker.
SCORM (SHARABLE CONTENT OBJECT REFERENCE MODEL)
A collection of standards and specifications for Internet-based e-learning content. It defines how the individual instruction elements are combined on a technical level and sets conditions for the software needed for using the content. SCORM is a specification of the Advanced Distributed Learning (ADL) Initiative from the Office of the United States Secretary of Defense.
Software that enables users to search the Internet using keywords. It has made information on the Internet instantly searchable by anyone with computer access. Google is a commonly used search engine.
A method of self-study in which learners in an online lesson are presented with content, asked a question, make a response, and receive immediate feedback while proceeding at their own pace. The computer program judges the learners’ responses and provides specific, evaluative and corrective feedback; it can also be programmed to adapt the content and questions to meet the needs of individual learners. Since learners log into the program independently and at a time of their choosing, this is considered an ‘asynchronous’ e-learning activity.
SELF-DIRECTED LEARNING (SDL)
A method of self-study in which learners are presented a paper-based lesson with content, questions to answer, and sample solutions against which to check their own work. Learners proceed at their own pace. SDL presented online is called Self-Directed e-Learning (SDeL).
A method of instruction in which learning resources and materials are identified and learners are responsible for meeting the performance objective. Considered informal instruction.
One of three types of behaviors exhibited in groups. Self-oriented behaviors do not advance either the task or human component. These are negative behaviors that include: blocking, dominating, verbal attacks, playing, withdrawing, self-seeking and pleading. See also Task-Oriented Behavior and Maintenance Behavior.
Johnson Graduate School of Management, Cornell University
The ultra-short-term memory that takes in sensory information through the five senses (sight, hearing, smell, taste and touch) and holds it for no more than a few seconds.
Online games used to persuade or educate. They may be simulations which look like games but are used to teach things like health care, business processes or military operations. Serious games are highly motivating and engaging and, depending on their sophistication, can provide a total immersion experience for players/learners.
Determining the order in which course content will be covered. The natural job order is the preferred order.
A type of production or completion question. The response is usually a few words, phrases or sentences.
Information retained in the brain and retrievable from it over a brief span of time, generally thought to be in the order of seconds. A commonly cited capacity is 7 ± 2 elements. In contrast, long-term memory can hold an indefinite amount of information.
A computer program that imitates a physical process or object by responding to the data input by users and changing conditions as though it were the process or object itself. It lets learners practice in a realistic situation without risks or excessive costs involved in using real equipment, e.g., a flight simulator.
In FKA’s Instructional Systems Design Methodology, a skill is the smallest action required to perform an ability.
SKILL AND KNOWLEDGE
One of eight performance factors; insufficient skill and knowledge can result in poor performance. If this is the factor causing the poor performance, learning is the solution for improving it.
SKILL AND KNOWLEDGE ITEM
In FKA’s Instructional Systems Design Methodology, a skill and knowledge item is the lowest level of detail in the Model of Performance (MoP).
SMALL GROUP EXERCISE
A presentation or application method where the learners are divided into groups, typically 3-5. Each group has a limited amount of time and specific objectives to meet within the exercise. Specific guidelines and structure are built into the exercise to make it as meaningful as possible to each learner.
The deliverable at the end of the Identify Needs sub-phase of FKA’s Needs Identification phase. This document identifies the business needs and performance goals (what should be), describes the current performance problems (what is), identifies the performance factor(s) causing poor performance and recommends the best solution(s). Learning is only one possible solution.
See Kinesthetic Space.
Alternates three short, intensely focused periods of learning with two 10-minute breaks during which distractor activities such as physical activities are performed by the learners. The breaks give the brain time to process information, and repetition of material in multiple learning sessions aids in creating a permanent memory. It is attributed to the work of R. Douglas Fields in Scientific American in 2005 and Paul Kelley in Making Minds in 2008.
Learning new content is spread out over time, as opposed to studying the same amount of content in a single event. Clark Quinn showed graphically that the learning phase takes longer, but the forgetting curve that follows the learning program is much less steep. In other words, retention is greatly improved.
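The effect can be illustrated with a simple exponential forgetting-curve model; the decay rates below are purely illustrative, not taken from Quinn’s data:

```python
import math

def retention(t_days, decay):
    """Fraction of material retained after t days under a simple
    exponential forgetting model."""
    return math.exp(-decay * t_days)

# Illustrative decay rates: spacing the learning flattens the curve.
massed, spaced = 0.30, 0.10
for day in (0, 7, 30):
    print(day, round(retention(day, massed), 2), round(retention(day, spaced), 2))
```

With the smaller (spaced) decay rate, retention after a month remains noticeably higher even though both start at 100%.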
STAGES OF LEARNING
In 1982 William Howell described the four stages of competence:
• Unconscious Incompetence – “I don’t know that I don’t know something.”
• Conscious Incompetence – “I know that I don’t know something.”
• Conscious Competence – “I have learned something, but I have to think about it as I do it.”
• Unconscious Competence – “I know something so well that I don’t have to think about it.”
STAND ALONE JOB AID
A job aid that replaces the need for a formal learning program for an ability or component. This instructional strategy is usually selected when the ability is performed infrequently, is straightforward and there are no health or safety risks.
(1) The level of proficiency or requirements (quantity or quality) that must be met during performance.
(2) One of five parameters used to describe an ability; they are identified in the Model of Performance.
(3) One of eight performance factors; if standards are missing or unknown, the level of performance may vary across the population.
In statistics, a result is called “significant” if it is unlikely to have occurred by chance alone. “A statistically significant difference” simply means there is statistical evidence of the difference; it does NOT mean that the difference is necessarily large, important or significant in the usual sense of the word.
Popular levels of significance are 5%, 1% and 0.1%. If a test of significance gives a p-value lower than the required level of significance, the result is informally referred to as “statistically significant”. For example, if someone argues that “there’s only one chance in a thousand this could have happened by coincidence” they are implying a 0.1% level of statistical significance. The lower the significance level, the stronger the evidence.
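The decision rule amounts to a simple threshold comparison; the sketch below assumes a default 5% level:

```python
def is_significant(p_value, alpha=0.05):
    """A result is declared statistically significant when its p-value
    falls below the chosen significance level (alpha)."""
    return p_value < alpha

print(is_significant(0.03))        # True  — significant at the 5% level
print(is_significant(0.03, 0.01))  # False — not at the 1% level
```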
The initial question or statement in a multiple-choice item. It may be text only or include a figure, chart, graph or other image.
In FKA’s Instructional Systems Design Methodology, a step is the second level of breakdown of an ability. This level is only used if it is helpful to group skill and knowledge items into meaningful clusters.
A cue, event, signal or situation to which a response must be made.
This is the second stage in Bruce Tuckman’s model of group development: Forming, Storming, Norming, Performing and Adjourning. During Storming, the members of the group jockey for control and influence and establish their roles. See also Forming, Norming, Performing and Adjourning.
The detailed plan or blueprint for each screen/page in an online course. It includes:
• All navigation and menu instructions
• The onscreen text
• The onscreen images (static or streaming)
• The audio script
• All questions to be asked along with the right answers, scoring instructions and feedback statements
• Any optional content
• Any links to be included
• Detailed instructions for the programmer
An interview driven by a list of questions determined beforehand. The interviewer must not deviate from or alter the questions in any way.
In FKA’s Instructional Systems Design Methodology, the sub-criterion test is the test at the end of a lesson. It should be designed to assess whether or not the lesson objective has been achieved.
SUBJECT MATTER EXPERT (SME)
An individual selected for his or her expertise in an area. SMEs can be involved throughout the entire instructional design cycle to provide content, develop content, validate deliverables and deliver the learning program.
SUMMATIVE EVALUATION (OR ASSESSMENT)
The goal of summative assessment (or evaluation) is to evaluate learning at the end of an instructional unit and to determine if the learning objectives have been met. The resulting data provides valuable information about learners’ preparedness to perform after the training. See also Formative Assessment.
SUMMATIVE EVALUATION (OF PROGRAMS)
Evaluates the results or outcomes of a program. It is concerned with a program’s overall effectiveness, not assigning marks/grades/scores to individual learners.
One of FKA’s performance parameters. Support identifies any human support accessible during the performance of the ability or component.
An evaluation form used to gather data.
A method of communicating where those taking part are connected in real time. Examples in the workplace are virtual meetings and video conferencing. In e-learning, it is an event in which all of the participants are online at the same time and communicating with one another, i.e., facilitated e-learning.
A presentation method in which a small group of learners makes a cooperative presentation to the rest of the group.
SYSTEMATIC LEARNING PROCESS
A model that breaks a unit of content into: the Presentation of information to the learners, an Application for the learners to practice, and the Feedback given to the learner. Also referred to as PAF.
The exact group of people at whom the learning program is aimed or targeted.
One of three behaviors exhibited by individuals in a group. Task-oriented behaviors contribute to the accomplishment of the task component, e.g., initiating, seeking, giving, elaborating, summarizing, coordinating and testing. See also Maintenance Behavior and Self-Oriented Behavior.
Johnson Graduate School of Management, Cornell University
An individual skill or knowledge item included in the learning program.
A pre-formatted form that simplifies the entry of data and speeds up development time, e.g., templates for lesson plans, storyboards, various question types.
An evaluative device or procedure in which a sample of the test-takers’ skill and knowledge in a specified domain is obtained and scored using a standardized process. The entries that follow describe the key properties of a well-constructed test.
Measures the degree to which the following three components are aligned: the learning objectives, the learning opportunities to achieve these objectives, and the method used to assess attainment of the learning objectives.
TEST BLUEPRINT OR TEST SPECIFICATIONS
Defines in detail the structure for the test forms: content areas, cognitive levels, format of items and responses, number or proportion of questions by content area, etc.
Includes purpose of the test, intended audience and other background information.
The principle that every test-taker should be assessed in an equitable way. Fair tests should be free of bias based on characteristics such as race, religion, gender or age.
The degree to which the test resembles the actual required performance.
TEST FOR UNDERSTANDING (TFU)
Questions asked during Presentation to ensure learners understand before they move to the planned Application. Sometimes called Knowledge Checks.
Measures the ability of a test to produce consistent scores or classifications. It is one of the most important criteria of the quality of a test. There are five types of reliability:
• Decision consistency
• Alternate forms reliability
• Test-retest reliability
• Internal consistency
• Interrater reliability
Results can only be considered reliable if the sample size used is large enough.
Estimate of the extent to which a test can provide consistent, stable test scores for the same group of test-takers across time.
The procedures designed to prevent cheating. Access to the content must be carefully restricted during the development, scoring and analysis of the test. A secure test is not for publication in any form or in any venue.
The extent to which test-takers know beforehand: the true purpose of the test, what will be tested, how it will be tested, the criteria for success and how individual questions will be scored.
The extent to which a test actually measures what it is supposed to measure. It is one of the most important criteria of the quality of a test and depends on reliability: a test cannot be valid if it is not reliable. There are three forms of validity:
• Content validity
• Concurrent validity
• Predictive validity
Since validity requires reliability and proving reliability requires data from an adequate sample size, the validity of results is also based on collecting data from a large enough sample.
A question that asks about the content itself. For example:
• The definition or illustration of…
• The reason behind…
• The relationship between…
• The differences/similarities between…
TOOLS AND REFERENCES
One of FKA’s performance parameters. These are devices, implements or written material used during performance of an ability or component. Job Aids are listed separately.
An input device layered on top of the visual display (screen) that lets the user make choices by touching icons or graphical buttons on the screen. It works by sensing the position and movements of the user’s finger(s) and passing that data to the running program.
TRAINING NEEDS ANALYSIS
An investigation to define, plan and cost-justify the best learning program for a given situation.
Any planned activity following a formal learning program that reinforces the skills and knowledge acquired during that program. A transfer activity reinforces on the job what was learned in the program, while a bridging activity moves the learner from the end-of-learning performance level to the required job performance level.
A set of activities developed to ensure all skills and knowledge learned during a formal learning program are transferred back to the job. It should include a plan to reduce or eliminate barriers to the transfer, along with the roles and responsibilities of the learners, the learners’ managers and the learning organization. A transfer strategy helps learners transfer what was learned in the program to the job, while a bridging strategy helps learners move from the end-of-learning performance level to the required job performance level.
See Test Transparency.
See Binary Choice Item.
(1) Instruction given to learners individually or in small groups by an instructor or ‘tutor’.
(2) A computer-based, interactive method of learning. The computer presents new concepts and skills through interactive text, illustrations, descriptions, questions and problems. Information is sequenced to build on previously learned concepts, and feedback and guidance are often provided. Among the most common examples are tutorials that teach the use of a computer system or software.
One type of coaching activity. In the workplace environment, tutoring is the process of teaching the coachee how to perform specific tasks. When done by a peer it is called peer-tutoring.
UNPROCESSED TEACHING POINT
A weak learning interaction in which the learner can see and copy the answer to a test-for-understanding question directly from the presentation. Because the learner does not need to process the information, the probability of retention is reduced.
An interview where questions are dependent on the content of the interviewee’s responses. “Go with the flow” so to speak. Usually conducted by a subject matter expert.
UNTESTED TEACHING POINT
A weak learning interaction in which learners are not asked to respond to legitimate teaching points. It consists of presentation without application, so the total responsibility for retention lies with the learner.
The activities performed throughout FKA’s Instructional Systems Design Methodology to assure quality and manage risk across the project. They include review and sign-off of deliverables by the client and testing with learners.
See Test Validity.
Any communication expressed in words, tone, pitch, inflection and emphasis.
The online environment in which a facilitated e-learning lesson takes place. An instructor/facilitator guides the learning for a small number of participants (8 to 12 is ideal). An instructor/facilitator who makes full use of the platform’s technical capabilities and his or her own skill can keep participants engaged in meaningful activities.
Acronym for: Variety, Interaction, Visuals and Examples. See Motivation and Interactive Instruction.
FKA uses this term to identify a presentation which is conducted over the Internet and may involve hundreds of participants at the same time. In general the transmission of information moves in one direction only (from presenter to participants) and minimal engagement is possible for participants (polls, chat, Q&A, emoticons).
WORK ENVIRONMENT NEEDS
Any processes, systems, structures or conditions in the workplace that impact favorably or unfavorably on performance. They are often items that need to be reviewed and changed to enable performance objectives to be met.
Robinson, D.G., and Robinson, J.C. Performance Consulting: Moving Beyond Training
Acronym for “What’s in it for me?” It identifies the learner’s personal motivation for engaging in the learning.
A Web application that lets all users add and edit the content, e.g., Wikipedia (wikipedia.org).