Evaluation

Formative

Formative evaluation must be performed on the delivery system as well as the content. Usability testing of the environment needs to occur early in the project to prevent massive patching at the end. We check whether the interface and the navigational flow are comfortable and intuitive for the learner.

Later in the project, once content is being produced, we hold reviews to check accuracy of content, grammar, and presentation style.

We use tools embedded in the courseware to gather reviewer feedback in a database. This allows us to combine the comments of multiple reviewers and to prioritize the entries on a report for the programming staff to address.
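The sketch below illustrates one way such an embedded review tool might store comments and roll them up into a prioritized report. It is a minimal illustration only; the table layout, field names, and severity scale are assumptions, not our actual tooling.

    # Minimal sketch: embedded reviewer comments stored in a database and
    # rolled up into a prioritized report. Schema and field names are
    # illustrative assumptions.
    import sqlite3

    conn = sqlite3.connect("review_feedback.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS feedback (
            reviewer   TEXT,
            screen_id  TEXT,      -- where in the courseware the comment was made
            category   TEXT,      -- e.g. 'accuracy', 'grammar', 'presentation'
            severity   INTEGER,   -- 1 = cosmetic ... 5 = blocking
            comment    TEXT
        )
    """)

    def log_comment(reviewer, screen_id, category, severity, comment):
        """Called by the embedded review tool when a reviewer submits a comment."""
        conn.execute(
            "INSERT INTO feedback VALUES (?, ?, ?, ?, ?)",
            (reviewer, screen_id, category, severity, comment),
        )
        conn.commit()

    def prioritized_report():
        """Combine all reviewers' comments, worst severity first, grouped by screen."""
        return conn.execute(
            "SELECT screen_id, category, severity, reviewer, comment "
            "FROM feedback ORDER BY severity DESC, screen_id"
        ).fetchall()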

Summative

Once the courseware is in the field, we like to hear from the learners about what is good or bad. This information is useful for updating the courseware for accuracy of content and improved usability.

As with the formative evaluation, we use embedded tools to gather all relevant information about student type, learning object, objective, location in content, etc., so that the learner need only enter his or her comment. This data can be captured in distributed databases and filtered to one database for a holistic review.
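The sketch below shows how the contextual fields might be captured automatically so the learner types nothing but the comment, and how records from distributed site databases could be merged for review. The record layout and field names mirror the list above but are assumptions about the actual schema.

    # Illustrative sketch only: the courseware fills in the contextual fields,
    # so the learner enters only the comment itself. Field names are assumptions.
    from dataclasses import dataclass, asdict

    @dataclass
    class LearnerComment:
        student_type: str      # captured from the learner's profile
        learning_object: str   # captured from the current lesson
        objective: str         # captured from the current objective
        location: str          # captured from the current screen or page
        comment: str           # the only field the learner actually enters

    def merge_databases(site_records):
        """Filter records gathered at distributed sites into one list for holistic review."""
        merged = []
        for records in site_records:          # one list of records per site database
            merged.extend(asdict(r) for r in records)
        return merged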

Additionally, through the courseware, we can randomly administer online surveys soliciting information from the learner.
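A random administration of this kind might look like the sketch below: at a natural break point, a small fraction of learners is offered the survey. The sampling rate and function names are assumptions for illustration.

    # Minimal sketch of random survey administration. The 10% rate is an assumption.
    import random

    SURVEY_RATE = 0.10

    def maybe_offer_survey(show_survey):
        """Call at a natural break point; show_survey() presents the online survey."""
        if random.random() < SURVEY_RATE:
            show_survey()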

In systems that track student progress against individual objectives, we also look for high failure rates, which are indicative of poor instructional content or broken interaction functionality. These systems also allow us to tie performance in the classroom to performance on the job and/or organizational effects, if the organization keeps such data.
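A simple way to surface such objectives is sketched below: compute the failure rate per objective from the tracking data and flag those above a threshold. The 30% threshold and the record format are assumptions, not the actual tracking system.

    # Illustrative sketch: flag objectives whose failure rate exceeds a threshold,
    # a possible sign of weak content or a broken interaction.
    from collections import defaultdict

    def high_failure_objectives(attempts, threshold=0.30):
        """attempts: iterable of (objective_id, passed) tuples from the tracking system."""
        totals = defaultdict(lambda: [0, 0])          # objective -> [failures, attempts]
        for objective_id, passed in attempts:
            totals[objective_id][1] += 1
            if not passed:
                totals[objective_id][0] += 1
        return {
            obj: failures / count
            for obj, (failures, count) in totals.items()
            if count and failures / count > threshold
        }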
