Why and how organizations of different sizes evaluate the transfer of training and use technology to capture, store, and analyze the transfer of training data: A sequential explanatory mixed-method study
Date
2025
Abstract
This study investigated why organizations value training transfer evaluations, how technology supports these evaluations, and how organizational size influences their implementation. Despite substantial investments in training, organizations often lack evidence of its effectiveness because measuring training outcomes is difficult. A sequential explanatory mixed-methods design was employed: quantitative data were collected from ATD and ISPI members, followed by qualitative insights from semi-structured interviews. SPSS analysis of the survey data and thematic analysis of the interviews revealed that, although organizations value higher-level evaluations (Levels 3 and 4 of Kirkpatrick's Model), they primarily implement lower-level evaluations (Levels 1 and 2) because of barriers such as the difficulty of isolating the impact of training and limited managerial skill in evaluating job performance. The findings underscore the central role of technology in training transfer evaluations: learning management systems serve as the primary tool for data collection and storage, learner surveys for analysis, and business metrics for impact assessment. A notable gap exists between large and small organizations; larger entities tend to leverage more advanced technologies, whereas smaller ones often rely on manual methods, indicating a significant technology divide. The study highlights that while organizations understand the importance of measuring behavioral change and business impact, resource and expertise constraints limit what they can do. The research suggests that higher-level evaluations are critical for capturing training effectiveness and guiding resource allocation. Future research should build on these findings by addressing compliance issues, exploring diverse contexts, and evaluating the role of emerging technologies in improving training evaluation practices across organizational types.
Keywords
Educational technology, Dynamic Transfer, Evaluation, Evaluation Models, Evaluation Technologies, Transfer of Training
Extent
404 pages
Rights
All UHM dissertations and theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission from the copyright owner.