Assessing the Design of a Learning Program: Part 2

    In part 1 of this blog we defined what we mean by an “effective design,” introduced our Learning Evaluation form, and discussed how designers can use it with their clients to confirm a clear understanding of the clients’ expectations for their learning programs.

    Part 1 also described the first step in the two-step assessment process: using the Learning Evaluation form to record the reviewer’s observations.

    Step 2 – The Learning Evaluation Scoring Spreadsheet

    The second step of the process involves entering the observation data recorded in step 1 into a spreadsheet version of the Learning Evaluation form (see excerpt in Figure 2), called the Learning Evaluation Scoring spreadsheet.

    The spreadsheet:

    1. Lets you assign a weighting to each factor being assessed
    2. Lets you objectively rate the observation made for each factor
    3. Computes an overall score out of 100
    4. Flags the areas with the weakest scores
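    The spreadsheet’s arithmetic can be sketched in a few lines of code. This is a minimal illustration, not the actual spreadsheet formulas: the factor names, weights, ratings, and the flagging threshold below are all hypothetical.

    ```python
    # Hypothetical factors for illustration only: each maps to
    # (weight, rating out of 5). Weights express relative importance.
    factors = {
        "Motivation": (3, 2),
        "Overview": (2, 2),
        "Objectives": (3, 3),
        "Practice": (4, 4),
        "Feedback": (3, 4),
    }

    FLAG_THRESHOLD = 0.6  # assumed cutoff: flag factors rated below 60% of maximum

    def score(factors):
        """Return the overall percent score and the factors flagged as weakest."""
        max_points = sum(weight * 5 for weight, _ in factors.values())
        earned = sum(weight * rating for weight, rating in factors.values())
        overall = round(100 * earned / max_points)  # overall score out of 100
        flagged = [name for name, (_, rating) in factors.items()
                   if rating / 5 < FLAG_THRESHOLD]
        return overall, flagged
    ```

    With the sample numbers above, the weighted ratings earn 47 of a possible 75 points, giving an overall score of 63, with Motivation and Overview flagged as the weakest areas.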

    Figure 2 – Learning Evaluation Scoring

    A quick scan of the scoring on the sample spreadsheet below (see Figure 3) shows a course design score of 59%, below the acceptance limit of 75%, suggesting this course would benefit from some enhancements to its design. Looking down the spreadsheet at the details shows where to invest the re-design effort: the course Introduction is flagged as an area for improvement, and within the Introduction the weakest areas are Motivation and Overview. The Objectives could also be improved.

    Figure 3 – Learning Evaluation Scoring: Weaknesses

    You can use the scoring in the spreadsheet to help you prepare a course assessment report that identifies strengths and weaknesses and makes recommendations for improvements. The two assessment tools (the observation collection form and the scoring spreadsheet) can be included as attachments to your report.

    Conclusion

    I do not want to ‘over-sell’ this as a scientific method of assessing instructional design. Think of it as a job aid to help you structure your review process and record your observations. The value of this approach increases as you complete more reviews and are able to make comparative observations. It is motivating to see your individual design scores increase with each project, and together with your L&D team you will have some evidence to support the claim that your team is producing “effective” designs.

    Contact FKA if you want more information about our Learning



    Jim Sweezie
    VP Research and Product Development
