
PERSPECTIVE ON PRACTICE

Evaluation of training in organisations: a proposal for an integrated model

Pilar Pineda
Universidad Autónoma de Barcelona, Barcelona, Spain

Received 27 July 2009; revised 2 November 2009; accepted 23 March 2010

Abstract

Purpose – Training is a key strategy for human resources development and for achieving organisational objectives. Organisations and public authorities invest large amounts of resources in training, but rarely have the data to show the results of that investment. Only a few organisations evaluate training in depth, owing to the difficulty involved and the lack of valid instruments and viable models. The purpose of this paper is to present an evaluation model, successfully applied in the Spanish context, that integrates all training dimensions and effects, so as to act as a global tool for organisations. The model analyses satisfaction, learning, pedagogical aspects, transfer, impact and profitability of training, and is therefore a global model.

Design/methodology/approach – The paper's approach is theoretical, and the methodology used involves a review of previous evaluation models and their improvement by comparing their application in practice.

Findings – An analysis of pedagogical aspects enables training professionals to improve training quality, as they are able to identify weaknesses in elements such as training design, needs analysis and training implementation, and improve on them. In fact, the quality of these elements depends entirely on the training professional. The improvement of pedagogical aspects, as a result of their evaluation, increases training quality and the results of training in organisations, which can be identified by evaluating the other levels of the model, particularly learning, transfer and impact.

Practical implications – The author has applied the model successfully in several public and private organisations, in industry and in the services sector, which demonstrates its usefulness and viability in evaluating the results of training. The model therefore has interesting practical implications, as a useful tool for training managers to evaluate training results, and it provides a global, simplified approach to the complex evaluation function.

Originality/value – The originality of this evaluation model lies in its focus on a key and novel aspect – the pedagogical dimension – providing an integrated tool that can be easily adapted to any organisation.

Keywords Human resource development, Training, Training evaluation

Paper type Conceptual paper

Why evaluate training? In a today’s changing global context, both individual and collective skills are the most important assets for organisations, and determine their productivity, competitiveness and ability to adapt and be proactive when faced with an uncertain environment. Training is a key strategy for generating skills in people, since it enables them to both learn and unlearn skills – in other words, to acquire


new skills and change inappropriate ones. This is why investment in training is high: in Europe, according to the latest Cranfield Study Executive Report, the average investment in training in 2005 was 2.99 per cent of payroll, and in Spain it was 1.95 per cent (ESADE, 2005).

In order for training to be considered an investment, it must be held accountable like any other investment made by the organisation, and it must be demonstrated that the decisions and actions taken are relevant and profitable. In other words, the actual contribution made by training to the organisation's results must be ascertained. Evaluation is the key tool for this purpose. Thus, the evaluation of training is directly linked with the organisation's quality systems, as the information it provides enables training results to be identified, possible deficiencies to be analysed and improvements to be introduced to optimise the training function as a whole (Holton, 1996; Kirkpatrick, 1998).

Training professionals recognise that it is important and necessary to evaluate training, but such recognition does not translate into the implementation of rigorous evaluation systems that indicate the results of training. The CVTS survey (Eurostat, 2009) indicates that the status of training evaluation in Spain is very similar to that in the rest of Europe. Table I describes the percentage of companies that evaluate the effect of continuing training – that is, the results obtained – according to the evaluation method used and the frequency of the evaluations. Seven out of ten companies evaluate some aspect of training; however, the percentages decrease when it comes to evaluating training results and their application to the workplace, which occurs in only two out of every ten businesses. Evaluation of training outcomes is found mostly in large companies; in Spain, 90 per cent of companies have fewer than 50 employees (FTFE-D'ALEPH, 2006; Pineda, 2007), so only a few companies evaluate the results of training in a systematic and rigorous way.

This paper aims to offer a training evaluation model, thereby helping the training professional to design and implement rigorous and coherent training evaluation processes that enable the entire training function to be optimised.

What does training evaluation mean?

The concept of evaluation

We understand the evaluation of training in organisations to mean the analysis of the total value of a training system or action, in both social and financial terms, in order to obtain information on the achievement of its objectives and on the overall cost-benefit ratio of training, which in turn guides decision-making. Evaluation involves collecting information on the results obtained in order to analyse and assess them and to facilitate the optimisation of training in the future. This optimising function is precisely what links evaluation to quality. Thus, evaluation focuses on determining the extent to which training has responded to the needs of the organisation, and on its translation in terms of impact and profitability. Therefore, evaluating training involves detecting and analysing the results obtained from a specific perspective: that of the contribution of training to organisational performance and the return on the investment made (Holton, 1996; Kirkpatrick, 1998).

Table I. Companies that evaluate the effect of continuing training, by evaluation method and frequency. The methods compared are: measurement of participant satisfaction; administration of tests or examinations to verify new skills; assessment of performance in the workplace; measurement of the implementation of new skills in the workplace; and all the evaluation methods combined. For each method the table reports, for the EU-27 and Norway and for Spain, the percentage of companies evaluating always, often, occasionally, and at all frequencies. Note: figures shown are percentages. Source: author's calculations based on statistics from Eurostat (2009) (data obtained from the CVTS-3, 2005 survey). [The numeric cells of the table are not reproduced here.]


Evaluation and planning

The evaluation of training in the organisation is one more phase in the overall planning process, and as such influences, and is influenced by, the other elements that constitute planning. Evaluation, in order to be truly effective, must be integrated into the planning process, occurring throughout the process in its various forms (Meignant, 1997). Figure 1 shows how evaluation is integrated into the training planning process.

As can be seen, a very close link is established between the planning of training and evaluation, which requires the design and implementation of both functions to run parallel to each other and simultaneously. Thus, in order for training evaluation to carry out the functions mentioned above and to be truly effective, it should be designed at the same time as the training is being planned. This makes the information provided by the evaluation, in its various applications, extremely useful for planning purposes, and assists with the decision-making required to improve the training system from the bottom up.

How to prepare a plan for evaluating training

There are several models of training evaluation that organise the process, provide guidelines for the content and outline the phases of its implementation. Some of them are based on the contributions of the classic master of evaluation, Donald Kirkpatrick, and his famous four levels:

(1) reaction;
(2) learning;
(3) behaviour; and
(4) results.

The Kirkpatrick model has several weaknesses (Holton, 1996), which successive models have tried to overcome – those of Phillips (1994), Wade (1994), Barzucchetti and Claude (1995), Swanson (1996), Holton (2000), among others.

Figure 1. Training and evaluation planning

These models overlook a key dimension of evaluation – precisely the one that allows us to improve training and make it more effective: the pedagogical dimension. Moreover, research (Pineda, 2003, 2007) and my experience as a consultant in various organisations point to the major constraints of the evaluation practices currently being implemented, as well as to the enormous difficulties faced by the training professional in conducting even a minimally effective evaluation. This reality has prompted us to develop an evaluation model that overcomes the limitations of organisational evaluation practices and offers a comprehensive and systematic evaluation proposal: the holistic training evaluation model, which has been successfully implemented in various organisations.

Our model rests on five basic questions that affect evaluation. It responds to these in an integrated manner and draws up evaluation strategies that cover the entire training process. Figure 2 shows the five questions and their sequencing, which leads to the evaluation process. Each question brings together a group of elements that need to be taken into account when designing a training evaluation process. The following answers are provided in response to each question.


Evaluation addressee

The first step is to identify the addressee and the focus of the evaluation process that we design. In organisational contexts customer orientation is a constant, and the same can be said of evaluation: the addressee determines the purpose of the whole process, its orientation and its components. The addressee can be anyone from the company itself to the training department, the trainer, the participant, internal and/or external clients or social agents. However, the most common beneficiary when developing evaluation processes is the organisation as a whole, including all its members.

Evaluation elements

The next step is to identify which factors and aspects of the training we want to evaluate. Here we identify six basic levels of evaluation, which can be broken down into sub-levels and then into specific elements. They are:

Figure 2. Training evaluation process


(1) participant satisfaction with the training;
(2) learning achieved by the participants;
(3) pedagogical coherence of the training process;
(4) transfer of training to the workplace;
(5) impact of training on organisational goals; and
(6) profitability of training for the organisation.

In the next section each of these levels is detailed, specifying its content, stages and the most suitable instruments for its evaluation. Thus, the components of our model are described.

Evaluation agents

The agents making judgements regarding the training should be all those affected by it, from the participant to the organisation's management, including the trainer, the training department, the participant's superior, his/her colleagues, customers, etc.

Evaluation periods

There are four basic periods in which training evaluation can be undertaken, corresponding to the types of evaluation presented earlier:

(1) before starting the training – initial or diagnostic evaluation;
(2) during the training – procedural or formative evaluation;
(3) at the end of the training – final or concluding evaluation; and
(4) some time after completing the training – deferred evaluation, or an evaluation of transfer and impact.

Evaluation instruments

There is a wide range of possible instruments and tools with which to evaluate training. We may use questionnaires, individual and group interviews, tests and final examinations, learning activities and products, systematic observation, demonstrations, evaluation reports, qualitative and quantitative impact indicators, and profitability calculations.

The holistic evaluation model stems from comparing the responses to the five basic questions presented in Figure 2 and integrating them into a comprehensive whole, thus enabling the design of an effective evaluation plan tailored to each training situation. Table II shows an example of an evaluation plan drawn up according to the holistic model; it links the elements evaluated with the agent who evaluates and with the tool used.

Our model allows us to analyse all the variables that affect the evaluation in an integrated manner, and thereby to design comprehensive and coherent evaluation processes that are tailored to each situation – ultimately, evaluation processes that are effective and efficient in terms of available resources. The introduction of this comprehensive training evaluation model within organisations can prove to be a very useful tool for the training department: a tool that facilitates the complex task of evaluation and helps overcome the many difficulties that often surround it.

Table II. Training evaluation plan

What is evaluated? | Participant | Trainer | Training department | Supervisor | Management
Satisfaction | Questionnaire, collective valuation (P-F) | Observation (P-F) | Interview, report (F) | – | –
Learning | Self-evaluation (I-P-F) | Exercises, tests, learning activities (I-P-F) | Report (F) | – | –
Pedagogical appropriateness | Questionnaire (F) | Self-evaluation (I-P-F) | Observation, interview, report (I-F) | – | –
Transfer | Self-evaluation (TA) | – | Interview, observation (TA) | Observation, questionnaire (TA) | –
Impact | – | – | Impact indicators (TA) | – | Impact indicators (TA)
Profitability | – | – | Profitability calculation (TA) | – | Profitability calculation (TA)

Notes: I, initial evaluation; P, process evaluation; F, final evaluation; TA, some time after training evaluation
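To make the logic of such a plan concrete, the sketch below expresses a few cells of Table II as simple records in Python. It is a minimal illustration, not part of the model itself: the class and field names are ours, and the entries shown are only a fragment of the plan above.

```python
from dataclasses import dataclass
from enum import Enum

class Period(Enum):
    """Evaluation periods of the holistic model (the I, P, F, TA codes of Table II)."""
    INITIAL = "I"       # before training: initial or diagnostic evaluation
    PROCESS = "P"       # during training: procedural or formative evaluation
    FINAL = "F"         # at the end of training: final evaluation
    TIME_AFTER = "TA"   # some time after training: deferred (transfer/impact) evaluation

@dataclass(frozen=True)
class PlanEntry:
    """One cell of the evaluation plan: what is evaluated, by whom,
    with which instrument, and in which periods."""
    level: str        # what is evaluated (one of the six levels)
    agent: str        # who evaluates
    instrument: str   # tool used
    periods: tuple    # when the instrument is applied

# A fragment of the plan in Table II, expressed as entries.
plan = [
    PlanEntry("Satisfaction", "Participant", "Questionnaire", (Period.PROCESS, Period.FINAL)),
    PlanEntry("Learning", "Trainer", "Learning activities", (Period.INITIAL, Period.PROCESS, Period.FINAL)),
    PlanEntry("Transfer", "Supervisor", "Observation", (Period.TIME_AFTER,)),
    PlanEntry("Profitability", "Management", "Profitability calculation", (Period.TIME_AFTER,)),
]

# Answering one of the five basic questions, e.g. "when is evaluation done?":
for entry in plan:
    codes = "-".join(p.value for p in entry.periods)
    print(f"{entry.level:<14} {entry.agent:<12} {entry.instrument:<26} {codes}")
```

Laying the plan out as explicit entries makes it easy to check coverage: every level should appear at least once, and every deferred (TA) entry implies a follow-up task after the training ends.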


This model has been implemented successfully in various organisations, both public and private. It has been implemented as a consultancy process, and tools and knowledge have been created so that the evaluation of training can continue in the future. The lessons learned from the implementation of the model are discussed in the conclusions.

Next we analyse each of the evaluation levels in order to describe the components of our model. The analysis offers a detailed look at their contents and at the strategies that help develop them.

Levels of training evaluation

Level 1: Participant satisfaction

The first level of evaluation is to ascertain the participants' opinions on the training received and their level of satisfaction with it. The aspects that usually make up this level of the evaluation, and on which the participants' opinions are sought, are as follows:

• the appropriateness of the training to their needs and expectations;
• the achievement of the goals set by the training;
• the quality of the content – its suitability, level, depth, interest, and the ratio between theory and practice;
• the quality of the methods and techniques used – suitability, variety, enjoyment;
• the quality of the pedagogical resources – documents, audiovisual materials, projection equipment;
• the trainer – his/her knowledge and pedagogical skills, communication, steering of the group;
• the group climate and level of participation;
• the quality of the other resources that come into play, such as classrooms and spaces, services (e.g. coffee, lunch), timetables and information received;
• the scope for applying what has been learned in the workplace; and
• their suggestions and proposals for improvement.

Virtually all organisations evaluate this level, usually through a questionnaire that participants complete just after the training has ended. However, the satisfaction of participants can also be assessed during training, with the intention of introducing improvements during the process as a result of participants' opinions. Other evaluation instruments can also be used, both during and at the end of training, such as:

• Informal or spontaneous evaluation, through group questioning or by questioning some of the participants individually about their satisfaction regarding the above-mentioned aspects.
• Collective assessment, applying group techniques that organise the participants into small discussion groups to think about one or more of the items outlined above. The assessment can be conducted by the trainer or by a person from the training department, and allows the gathering of consensus views on the level of satisfaction with the training.
• Participant observation by the trainer, which can lead to the development of a report that is delivered to the training department.
• Interviews conducted by the training department with some participants selected at random and/or with the trainer.
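As a minimal illustration of how the questionnaire data from this level might be aggregated, the following Python sketch computes per-item and overall satisfaction scores. The item names and the 0-5 scale are assumptions for the example (the scale matches the one used in the case study below), not something prescribed by the model.

```python
from statistics import mean

# Hypothetical responses to a satisfaction questionnaire: one dict per
# participant, each item scored on a 0-5 scale.
responses = [
    {"needs_fit": 4, "objectives_met": 5, "content_quality": 4, "trainer": 5, "applicability": 3},
    {"needs_fit": 3, "objectives_met": 4, "content_quality": 4, "trainer": 4, "applicability": 2},
    {"needs_fit": 5, "objectives_met": 4, "content_quality": 3, "trainer": 5, "applicability": 4},
]

# Mean score per item: weak items (here, applicability) point to aspects
# the training department should examine at the other evaluation levels.
for item in responses[0]:
    print(f"{item:<16} {mean(r[item] for r in responses):.2f}")

# A single overall satisfaction index, to be read with the caveats noted
# in the text: it reflects opinion, not learning or transfer.
overall = mean(mean(r.values()) for r in responses)
print(f"overall satisfaction: {overall:.2f} / 5")
```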

The professional who prepares the evaluation plan will select one or more of these instruments depending on the characteristics of the training to be assessed, the existing tradition in the organisation and the available resources. The most effective formula is perhaps the combination of the questionnaire with collective assessment and the preparation of a report by the trainer.

The evaluation at this level has several limitations that should be discussed. First, one should emphasise the great sensitivity of the results to the climate created during training; thus, one can find a very high level of satisfaction with inadequate training activities that are nevertheless led by a trainer with great social and communication skills, and vice versa. Second, this level of evaluation provides the participant's view of the training, but does not report on the actual learning achieved by the participant or on the application of such learning in the workplace, and even less on the impact that all this will have on the organisation. Therefore, this level of assessment should be followed by the next levels since, by itself, it provides useful but insufficient information on the results of training. Organisations that only assess participant satisfaction – of which, unfortunately, there are still many – do not in fact evaluate training but merely reflect the opinion of its most immediate clients. The usefulness of this will depend on how the information collected is put to use and how it is linked with the results from the other evaluation levels.

Level 2: Learning achieved by the participant

The second level of evaluation focuses on identifying what participants have learnt by the end of the training. Evaluation at this level presupposes the existence of operational and measurable training objectives, which act as a reference for the evaluation – in other words, as a norm against which to value the learning achieved. But one must also bear in mind that training can generate unexpected learning, which is consequently not reflected in the proposed goals. The evaluation system should be designed to allow this unforeseen learning to be captured, as it is sometimes of great value to the organisation and to individuals.

The evaluation of learning takes place primarily in three stages:

(1) At the beginning of training, in order to determine the entry level of the participants. This diagnostic evaluation, if done in advance, allows the entire design of the training activities to be tailored to the real needs of participants, thereby increasing the effectiveness of the training.

(2) During training, in order to detect the pace of learning of participants and introduce improvements to help them reach the expected level of learning.

(3) At the end of training, in order to assess the results achieved by the participants – namely, the learning achieved thanks to the training.

The instruments used depend heavily on the type of training in question and on the culture of evaluation present within each organisation. If the training focuses on the transmission of knowledge, the most appropriate instrument is the classic test or


examination which, applied before, during and after training, allows us to detect the participants' level of learning. Nevertheless, if the training does not lead to a diploma or a promotion, people are very hesitant about these kinds of instruments, and therefore only a few organisations make use of them. If the training is focused on acquiring skills or on skills development, the most appropriate instrument is a test of actual or simulated performance, which can be applied in the three stages mentioned above. This type of tool also allows the level of knowledge acquisition related to the skills learned to be evaluated. When training focuses on attitudes and behaviour, evaluation becomes complicated; performance tests assist in detecting the acquisitions made, but it is also useful to apply specific tools, such as attitude scales, which provide concrete data on the new attitudes generated through the training.

There is another very useful tool, both for the information it provides and for the ease with which it may be applied, as it is always within the trainer's reach: learning activities. All the activities that the trainer carries out are geared towards generating learning among participants, but when well implemented they can also be used for evaluation. For example, conducting an exercise on accounting shows the participants' level of understanding of the subject, and implementing a role-playing game shows the behaviour and attitudes that people have acquired. Learning activities are an important tool for evaluating the learning achieved and can be used in the three evaluation stages, although they are most fruitful when applied during training. The information provided by these instruments can be complemented by the participants' self-evaluation of their own learning, conducted before, during and after training. The training department can then produce a report that integrates and assesses all the information collected about the level of learning generated through training.

The evaluation of learning is essential, as it detects the immediate results of training and allows for the further evaluation of the transfer of training to the workplace, which is what really interests the organisation. In fact, if we do not know what participants have learned, we cannot expect them to transfer anything to their workplace.

Level 3: Pedagogical appropriateness

This level focuses on determining the internal coherence of the training process from a pedagogical point of view. In other words, it investigates the pedagogical appropriateness of both the design and the delivery of training, so that the training objectives are achieved as effectively and efficiently as possible. This evaluation level is specific to the model presented here and gives it a clear pedagogical orientation, differentiating it from other evaluation models. The elements evaluated at this level are those that relate to the design and implementation of the training and its suitability for the target group. They are as follows:

• Training objectives – their relevance is analysed according to the need or needs expected to be met, their suitability to the level of the target group, and the quality of their design and writing.
• Content – its relevance is determined in relation to the objectives, along with the appropriateness of its selection, its level of precision and structuring, and the balance between theoretical and practical content.
• Methodology – its relevance is determined in relation to the objectives and content selected, the relevance of the methods and techniques prioritised, the presence and usefulness of practical methods, and the quality of application of the methodology.
• Human resources – the teaching skills of the trainers are evaluated, in terms of knowledge and practical experience as well as pedagogical skill and group management.
• Material and functional resources – their appropriateness and relevance are analysed, including the quality of spaces, furniture, pedagogical resources, timetables and other material aspects related to training.
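One way to record judgements on these five elements is a simple rubric, as in the hypothetical Python sketch below. The four-point scale and the threshold for flagging weak elements are our assumptions for illustration, not part of the model.

```python
# Hypothetical rubric for Level 3: each pedagogical element is rated 1-4
# by the training department after the trainer interview and observation.
RATINGS = {1: "inadequate", 2: "weak", 3: "adequate", 4: "strong"}

rubric = {
    "objectives": 4,          # relevant to needs, well written, measurable
    "content": 3,             # matched to objectives, theory/practice balance
    "methodology": 2,         # methods suited to objectives and content
    "human_resources": 4,     # trainer's knowledge and pedagogical skill
    "material_resources": 3,  # spaces, materials, timetables
}

# Elements rated below "adequate" are flagged for redesign before the
# next edition of the training.
for element, score in rubric.items():
    flag = "  <- improve" if score < 3 else ""
    print(f"{element:<20} {score} ({RATINGS[score]}){flag}")
```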

From the range of instruments that can be used at this level, we have selected those used most frequently and those that provide the most significant findings regarding the pedagogical coherence of training. They are as follows:

• Participants' questionnaire – the training department can develop a questionnaire to gather the participants' views on the pedagogical coherence of the elements mentioned above, to be completed at the end of the training. Rather than developing a specific questionnaire, several items regarding this level of evaluation may be introduced into the satisfaction questionnaire aimed at the participants. Nevertheless, it should be noted that the information obtained through these items only provides the participants' opinions on pedagogical appropriateness, and must therefore be compared with the results obtained through other instruments.
• Trainer interview – the training department conducts an interview with the trainer to ascertain the pedagogical appropriateness of the design and delivery of the training. The interview collects information on all the elements outlined above, and therefore takes place at several points: at the beginning of training, to oversee and adapt the design; during training, to monitor its implementation; and after training, to assess the adequacy of the process undertaken. The interview provides very useful information and helps detect imbalances so that the training can be improved.
• Observation – observation is conducted during the delivery of the training, and may be of two types depending on the agent carrying it out. On the one hand, the trainer can conduct participant observation of the development of the training activities as well as of his/her own performance. The information obtained can be drawn up in a report to be discussed in the final interview with the training department mentioned earlier. On the other hand, the training department can conduct a systematic observation of the development of the training, using a recording system – for example, a checklist or video – and subsequently analyse the information gathered. This type of observation, given its cost and difficulty, is usually reserved for those training activities that, for specific reasons, require a thorough assessment of their pedagogical appropriateness.
• Self-evaluation – the trainer conducts a self-evaluation of the development of the training and of his/her own performance, which is reflected in a semi-structured document that is subsequently analysed by the training department.

These are the main options for evaluating the pedagogical appropriateness of training. When drawing up its evaluation plan, each organisation should select the evaluation items, agents, timing and tools according to its needs and actual possibilities. This level of evaluation provides very useful information for the training department: it guarantees the adaptation of the training design to the needs of the organisation, it allows improvements to be introduced during the training process, and it optimises subsequent applications.

Level 4: Transfer

This level focuses on detecting the changes that take place in the workplace as a result of training. At Level 2 the learning achieved by the participants is identified, but what really matters to the organisation is not the learning itself but its transfer to the workplace – that is, how it translates into changes in people's working behaviour. Thus, evaluating transfer means detecting whether the skills acquired through training are applied in the workplace and whether this application is sustained over time. Even though transfer is what all training activities should pursue, achieving it is not guaranteed and is sometimes not easy. There are several models that analyse transfer factors (Noe, 1986; Baldwin and Ford, 1988; Holton, 1998; Awoniyi et al., 2002; Clarke, 2002, 2005; Egan, 2004; IPDD, 2004; Kontoghiorghes, 2004; Lim and Morris, 2006; Shankar, 2006). Here we focus on those factors that depend on the training department and that determine the possibility of evaluating the results.

Training should be geared towards the transfer of the learning it generates, and this should be reflected in the design as well as in the implementation and monitoring of training. Thus, training must begin with a detailed knowledge of the organisation's needs, and these must be set down in operational objectives. These objectives will allow a subsequent evaluation of the changes in people's working behaviour: if we do not know the starting situation, or the objectives, we cannot objectively determine the changes that have occurred. Furthermore, training needs to be implemented following a methodology that facilitates and enables transfer – in other words, a methodology that is practical, implementable and close to the reality of the job, and that includes strategies to guide and ensure subsequent transfer. Finally, training has to provide mechanisms for monitoring and maintaining transfer, mechanisms that should run parallel to the evaluation.

In this way, the orientation of the training design towards transfer is the first requirement for achieving transfer of training. But transfer also requires the active involvement of other key agents in addition to the trainer, such as the participant and his/her superiors and colleagues. These play a crucial role both in facilitating transfer and in its evaluation: ensuring the application of the lessons learned, eliminating potential barriers and collecting the information that will make the evaluation possible.

The evaluation of transfer thus involves several persons, each playing a crucial role in its execution:

• Trainers and training specialists design the evaluation system and drive and oversee its implementation. For this reason they should obtain the cooperation of the other agents and negotiate their level of involvement in the evaluation of transfer.
• The participants also play an important role, through self-evaluating their transfer and assessing the potential barriers in their environment.
• The participant's supervisor or line manager is a key player, as he/she knows the daily performance of co-workers in detail and can assess whether changes have been achieved through the training.
• The participants' colleagues and even customers can act as important agents at this stage of the evaluation.

Nevertheless, it is worth noting that the evaluations of these agents may be highly subjective, which can invalidate their opinions. The evaluation instruments used should address this issue and ensure the objectivity of the information collected.

There are several instruments available to evaluate transfer. Depending on the type of training being evaluated and the characteristics of the organisational environment, the most efficient tools should be selected. The following instruments are used most frequently:

• Performance observation – systematic for repetitive tasks, participant observation for more complex tasks – an instrument that is slow to implement but provides very valid information.
• The interview, either with the participants themselves or with their superiors and colleagues, in whatever form: structured, informal, in person, by telephone, individual, group, etc.
• Questionnaires, for the participants as well as their colleagues. The information gathered can complement and be compared against the findings obtained using the previous instruments. Questionnaires aimed at participants can allow them to self-evaluate their transfer and can lead to a self-assessment report.
• Reports by superiors on the transfer detected, with detailed data on the results, the strengths and weaknesses, etc.
• The action plans developed by participants at the end of training and reviewed periodically, which not only represent a useful guide for transfer but also serve as an interesting assessment tool.

As regards the timing of the evaluation, it is advisable to wait between one and six months after completing the training, in order to allow time for transfer to materialise and stabilise after the "post-training euphoria". The most appropriate period depends on the type of learning generated by the training as well as on its complexity: the more complex and more numerous the skills acquired, the more time will be needed for transfer and stabilisation. In any case, the maximum waiting time should not exceed six months, so as to avoid forgetting, and the evaluation should be repeated periodically in order to assess – and enhance – the maintenance of transfer.
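A hedged sketch of how these ideas might be operationalised follows: it triangulates hypothetical transfer ratings from several agents, flags a divergent self-evaluation (the subjectivity problem noted above), and picks a deferred evaluation date within the one-to-six-month window. The scales and the complexity heuristic are illustrative assumptions, not part of the model.

```python
from datetime import date, timedelta

# Hypothetical transfer ratings (0-5) gathered some time after training
# from the agents discussed above.
ratings = {"self": 4.2, "supervisor": 3.1, "colleagues": 3.4}

# Triangulated transfer score: a plain average across sources.
transfer = sum(ratings.values()) / len(ratings)
print(f"transfer score: {transfer:.2f} / 5")

# A large gap between the self-evaluation and the other sources hints at
# the subjectivity problem noted in the text and calls for a closer look
# (e.g. performance observation).
gap = ratings["self"] - min(ratings.values())
if gap > 1.0:
    print("warning: self-evaluation diverges from other agents")

def deferred_evaluation_date(training_end: date, complexity: int) -> date:
    """Schedule the deferred evaluation: more complex skills need more
    time to stabilise, but never wait longer than six months.
    complexity: 0 (simple) .. 5 (very complex)."""
    months = min(1 + complexity, 6)
    return training_end + timedelta(days=30 * months)

print(deferred_evaluation_date(date(2010, 3, 1), complexity=4))
```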


Evaluating transfer is of crucial importance to the organisation as it demonstrates the contribution of training to the improved performance of individuals, as well as the benefits it brings to the organisation, in order to subsequently determine its impact and profitability. Thus, evaluating transfer is the first step towards providing thorough proof of the real value of training.


Level 5: Impact

The impact of training is understood to mean the effects of training activities on the organisation, in terms of responding to training needs, solving problems and contributing to the achievement of the strategic objectives that the organisation has set itself. The impact thus consists of the changes that the learning attained through training, and its transfer to the workplace, produces in the department or area of the trained person as well as in the organisation as a whole. In other words, the impact of training is conceived as the effects that training generates in the organisation as a result of the use of the skills that participants have acquired through it. There are two types of effects:

(1) qualitative effects, which cannot be translated into economic terms; and
(2) quantitative effects, which can be translated into monetary value.

It is the latter that make it possible to assess the profitability of training, which is addressed in the next level of evaluation.

The impact assessment focuses on identifying the results and benefits that training brings to the organisation. We understand benefit to mean the increase in levels of usefulness or welfare associated with the increasing quantity of training acquired. The calculation of the benefits concentrates on measuring the effects of training by establishing impact indicators. An impact indicator is a unit of measurement used to identify the concrete and tangible effects, qualitative and quantitative, of training in the organisation. These indicators make it possible to identify, monitor and measure the actual impact that the training has generated in the organisation over a period of time. Impact indicators may be expressed in various terms: as quantities (numbers of purchases or numbers of products), as indices (of quality or satisfaction), as periods (of delivery or of service provision) and as effects (materials used, human resources involved, etc.).

There are two types of indicators:

(1) economic, or hard, indicators; and
(2) qualitative, or soft, indicators.

Their characteristics are substantially different, if not opposed. Hard indicators are:

• easy to measure and quantify;
• easy to translate into monetary value;
• objective;
• common in corporate data;
• highly credible to management; and
• barely present in training.

Examples of hard indicators include sales, turnover, number of customers, number of errors, etc. Soft indicators are:

• difficult to measure and even more difficult to quantify;
• difficult to translate into monetary value;
• subjective;
• unusual in corporate data;
• scarcely credible to management; and
• always present in training.

Examples of soft indicators include the motivation of collaborators, suggestions made, the working atmosphere, etc.

The identification of valid indicators allows training benefits to be calculated in a thorough and appropriate manner. Since this is the most difficult procedure in the evaluation process, we present a series of guidelines and suggestions that may facilitate identification and help guarantee the success of the process as a whole:

• It is necessary to follow a set of criteria when selecting impact indicators. The most significant are relevance, moderate cost, reliability, acceptability, reduced numbers and a low pollution index.
• The impact indicators should be identified during the planning of training and should be directly linked with the training objectives as well as with the objectives of the organisation.
• All those affected by the impact evaluation must feel involved in the process and must participate actively in it.
• It is extremely useful to classify the impact indicators according to the different types of training to be assessed; this facilitates the whole process and makes it more cost-effective. It is also appropriate to link economic indicators to the organisation's operating statement.
• It is necessary to specify how each indicator is to be applied – in other words, the period, the agent, the source and the instrument to be used to measure it. The instruments most frequently used are observation and reports on the organisation's results.
• The evolution of each indicator should be followed up, using a follow-up table to facilitate data collection (a minimal sketch of such a table follows below).

Impact assessment is also known as the assessment of organisational results (Waagen, 1998), understood as the measurement and verification of the effects of training in relation to the attainment of the organisation's objectives – in other words, ascertaining the overall results of the training activities. The impact assessment is the most complex level of the model, but it is at the same time the most interesting for training professionals and for the organisation as a whole, since it shows the effects and real value of training and justifies the investments made.
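The follow-up table mentioned above can be kept very simple. The Python sketch below models one hard indicator with a baseline and periodic measurements; the indicator, figures and period labels are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ImpactIndicator:
    """A unit of measurement for a concrete effect of training,
    tracked over time in a follow-up table (names are illustrative)."""
    name: str
    kind: str        # "hard" (economic) or "soft" (qualitative)
    unit: str
    baseline: float  # value measured before training
    follow_up: dict = field(default_factory=dict)  # period label -> measured value

    def change(self, period: str) -> float:
        """Effect attributed to training in a given period, relative to baseline."""
        return self.follow_up[period] - self.baseline

# A hard indicator linked to the training objectives and, ideally,
# to the organisation's operating statement.
errors = ImpactIndicator("order-entry errors", "hard", "errors/month", baseline=120)
errors.follow_up = {"month 1": 95, "month 3": 70, "month 6": 65}

for period in errors.follow_up:
    print(f"{period}: {errors.change(period):+.0f} {errors.unit}")
```

Tracking the indicator over several periods, rather than at a single point, is what allows the maintenance of the effect to be assessed alongside its size.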


Level 6: Profitability

The translation of the impact of training into economic terms enables a profitability index to be obtained, expressed as the return, in monetary benefits, generated by the investment made in training. Two procedures are followed for this purpose:

(1) calculation of the costs involved; and
(2) calculation of the profitability.

Calculating costs. Cost calculation is the first step in assessing the profitability of training, and focuses on identifying the costs involved in the training processes carried out by an organisation. There are different types and classifications of costs. Those most commonly used in the field of training for organisations are as follows:

• direct costs – trainers, materials, spaces, per diems, etc.;
• indirect costs – management, design, administration, communication, additional materials, participants' salaries, etc.; and
• overheads – the general services of the organisation, such as utilities, cleaning, depreciation, etc.

All these costs are generally classified into fixed and variable costs, a classification that is very useful when preparing the training budget and when calculating the overall costs of various training activities. This calculation makes it possible to obtain the total costs, and therefore the investment made in training – amounts to be used subsequently to calculate profitability. The calculation of costs is the simplest of the calculations involved in evaluating profitability, as it merely involves collecting the data available in the organisation – usually found in the budgets and economic information relating to training – and adding it up in the required categories.

Calculating profitability. Once the impact in terms of benefits (evaluation Level 5) and the cost of training have been obtained, profitability can be determined. Two procedures may be highlighted here:

(1) cost-benefit analysis; and
(2) return on investment.

Both aim to obtain a profitability figure, and both are therefore based on the costs and benefits involved in the training. The cost-benefit analysis seeks the net benefit of the training, comparing costs with benefits using the following formula:

total benefits − total costs = net benefit

The return on investment, explained at length by Phillips (1994), calculates profitability as the net profit gained on the investment made – in other words, it looks for a profitability index. The formula applied is:

ROI (%) = (net benefits / costs) × 100

As can be seen, both methods for calculating profitability are based on a comparison of costs and benefits, and although they follow different processes they aim to identify the profitability derived from the training activities conducted. This is a purely economic calculation, and it therefore leaves aside the qualitative impact, the importance of which was discussed above; its results should accordingly be read together with the non-economic results obtained from the benefits calculation. Nevertheless, the calculation of profitability alone is enormously helpful in making decisions about levels of investment in training, and it provides data that are highly valued by the managing bodies of organisations.
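Both calculations are simple enough to sketch directly. In the Python example below all figures are invented for illustration; only the two formulas come from the text (and from Phillips, 1994, for the ROI).

```python
# Costs grouped into the categories used above (illustrative figures).
direct = 12_000      # trainers, materials, spaces, per diems
indirect = 18_000    # design, administration, participants' salaries
overheads = 3_000    # share of general services, depreciation
total_costs = direct + indirect + overheads

# Monetary benefit derived from the hard impact indicators (Level 5),
# e.g. the value of the reduction in errors over the evaluation period.
total_benefits = 45_000

# Cost-benefit analysis: total benefits - total costs = net benefit.
net_benefit = total_benefits - total_costs
print(f"net benefit: {net_benefit}")   # 12000

# Return on investment: ROI (%) = net benefits / costs * 100.
roi = net_benefit / total_costs * 100
print(f"ROI: {roi:.1f}%")              # 36.4%
```

Note that the two figures answer different questions: the net benefit states how much the training returned in absolute terms, while the ROI expresses that return per unit of money invested, which is what management bodies typically compare across investments.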


A case study in the health sector

The evaluation model has been applied successfully in different organisations in the Spanish context. Here a case study from the Catalan Government Health Department is presented. My research team was in charge of evaluating the effectiveness of the Training Plan for the Rational Use of Medicines. This plan was aimed at all the doctors in the region and sought to improve their practice in pharmaceutical prescription, so increasing the efficiency of the public resources allocated to health. A total of 153 training programmes were carried out between April and December 2007. The evaluation focused on the satisfaction of the doctors who participated in the training and the learning they acquired, the level of transfer of the skills acquired to the workplace, and the impact of the training on health centres.

The evaluation of the training plan was structured in two strategic actions with different working methodologies (see Figure 3):

(1) Generic evaluation – this evaluated the satisfaction and transfer of a representative sample of actions by means of two generic questionnaires.

(2) Specific evaluation – this evaluated the satisfaction, learning, pedagogical adequacy, transfer and impact of five training programmes using questionnaires, interviews, evaluation reports and the analysis of impact indicators.

In the generic evaluation, the questionnaire was administered through an online application. Using an opportunity sampling strategy, a total of 351 questionnaires were obtained. In the specific evaluation, ten reports were collected from trainers and those responsible for training, which included data on learning and educational value; 15 interviews were conducted with the managers of trainees to evaluate the transfer and impact of training; and impact indicators in the organisation were analysed.

Figure 3. Levels and instruments of each evaluation strategy


The results showed that the trainees evaluated the training positively. They considered that the training was useful and that it met their expectations, with a score of 3.82 on a scale from 0 to 5. The average level of learning acquired was 3.36, and the level of application of the training scored 3.1 (scale 0-5). Most of the trainees introduced improvements in their professional behaviour, especially in tasks concerning the selection and use of medicines. This was one of the main objectives of the training, and the evaluation therefore showed that the training was effective in modifying the professional performance of doctors in the use of medicines. The training had an impact on the organisation, although less than desired, as a result of the intervention of other cultural and organisational variables unrelated to the training – the positive aspect being that the evaluation allowed these variables to be identified. These results agree with those obtained through the qualitative methods applied in the specific evaluation of five training activities, which demonstrates the validity of the model, its internal coherence and its usefulness for evaluating training.

Conclusions: strategies to improve the evaluation of training in organisations

Our research and consulting experience shows that the way organisations evaluate their training falls far short of what would be desirable if evaluation is really to serve as a tool for optimising training quality. This precarious situation is due to the many difficulties involved in evaluating training and to a failure to comply with certain basic requirements in existing evaluation systems. The main difficulties involved in evaluating training are:

• the isolation of training results among the overall processes and variables that occur in organisations;
• the problem of measuring numerous results and, especially, of translating them into economic terms;
• the absence of appropriate tools and the difficulty involved in accessing certain information in the organisation;
• the resources required to design and implement this type of evaluation, which raise its cost;
• the lack of preparation of training professionals, who feel ill-equipped to deal with the complexity that training evaluation entails; and
• the lack of support from management bodies, which do not consider it necessary to allocate resources to assessing training results, and which prioritise training quantity over quality.

Any evaluation plan must consider a series of requirements that ensure that the evaluation is carried out in the most effective manner. Compliance with these requirements is the first step towards overcoming the difficulties outlined above. Meignant (1997) and Barzucchetti and Claude (1995) propose some basic rules for evaluation, to which we have added several strategies, resulting in the following requirements:

• Evaluation means comparing results with a previously defined reference. This reference is the situation that is expected to be achieved thanks to the training, and it is often expressed in the form of objectives. Therefore, the objectives must be well defined, observable and measurable.
• The evaluation plan should be based on a detailed analysis of the existing material and functional possibilities for its implementation. Thus, the available information should be considered, as well as the tools and resources at hand, the time required and the approximate costs involved. Ultimately, the evaluation plan must be feasible and realistic.
• The evaluation plan must be accepted by everyone involved in the evaluation, from participants to managers. If the agents do not accept the proposed plan it will fail – participation in evaluation is a guarantee of success (Russ-Eft and Preskill, 2008). It is therefore better to design a simple evaluation plan agreed upon by all than a complex plan that does not receive support from the organisation.
• Ensure that training is the main cause of the results obtained, isolating the possible effects of other factors in the organisation. It is advisable not to be over-confident if good results are obtained, and to analyse the real contribution of training to the success. In the case study presented, the evaluation helped isolate these variables and explain the reasons for the impact that training had on the organisation.
• Do not attempt to evaluate everything; aim instead to evaluate systematically the training that is strategic for the organisation or that plays an important role. Other training can be assessed more simply, provided that the results are collected and it is shown that the costs involved do not exceed them. In the case study above, every aspect of the five strategic training activities was evaluated, while the rest of the training was evaluated with an online questionnaire that was easy to apply.
• Distribute the evaluation results to the training clients and establish mechanisms for their use, in order to optimise the training and enhance its contribution towards the organisation's goals. The distribution and use of the results is what really gives meaning and value to the evaluation of training, and it enhances the active involvement of all concerned.

Meeting these requirements ensures the implementation of a comprehensive evaluation plan. It would be desirable for professionals qualified in pedagogical matters to be responsible for designing and applying the plan, in order to make training evaluation a priority within the organisation. This would be an overall strategy, and one that involves a radical change in the way many organisations view training and evaluation.

The evaluation model presented here has been applied successfully in several organisations, as it has helped identify the actual results of training in terms of transfer, impact and profitability, as well as the pedagogical elements that can be improved to increase training quality. This is demonstrated by the good results obtained in the case study presented above. Application of the model requires an investment of resources for adequate evaluation; in the case study presented, the


organisation devoted considerable economic and human resources to evaluating training, but the result was worth it. Indeed, after seeing the outcome of the evaluation, several organisations have considered whether it would be better to train less and evaluate more – or, in other words, to train better thanks to the optimisation provided by a rigorous evaluation process.

But beyond the gross value of the results provided by an evaluation, one cannot forget the positive effects that it has on individuals and on the organisation as a whole. Evaluation always presents an opportunity for progress, and this represents added value to be exploited. Therefore:

• training participants should see evaluation as providing guidance and an incentive to improve their learning;
• for those in middle management, evaluation offers the opportunity to guide and improve the follow-up of their collaborators;
• training managers and trainers can exercise self-criticism to improve their services and optimise their relationships and their role within the organisation; and
• management receives the opportunity to reflect on its management initiatives and strategies.

Thus, there are many benefits to training evaluation. All that is needed is to set to work to achieve them.

References

Barzucchetti, S. and Claude, J.F. (1995), Evaluation de la formation et performance de l'entreprise, Ed. Liaisons, Rueil-Malmaison.

ESADE (2005), Informe Cranfield ESADE. Gestión estratégica de Recursos Humanos, ESADE, Barcelona.

Eurostat (2009), "Continuing vocational training survey (CVTS2)", Population and social conditions 3/2000/E/No17, available at: http://circa.europa.eu/Public/irc/dsis/edtcs/library?l=/public/continuing_vocational/eu_manual_pdf/_EN_1.0_&a=d (accessed 23 October 2009).

FTFE-D'ALEPH (2006), "La evaluación de la iniciativa de acciones de formación en las empresas. Iniciativa 2006", internal document, FTFE, Madrid.

Holton, E.F. (1996), "The flawed four-level evaluation model", Human Resource Development Quarterly, Vol. 7, pp. 5-21.

Kirkpatrick, D.L. (1998), Evaluating Training Programs, Berrett-Koehler, San Francisco, CA.

Kontoghiorghes, C. (2004), "Reconceptualizing the learning transfer conceptual framework: empirical validation of a new systemic model", International Journal of Training and Development, Vol. 8, pp. 210-21.

Meignant, A. (1997), Manager la formation, Ed. Liaisons, Rueil-Malmaison.

Noe, R.A. (1986), "Trainees' attributes and attitudes: neglected influences on training effectiveness", Academy of Management Review, Vol. 11, pp. 736-49.

Phillips, J. (1994), Measuring Return on Investment, ASTD, Alexandria, VA.

Pineda, P. (2003), "Continuous training in Spain", in Schmidt-Lauff, S. (Ed.), Adult Education and Lifelong Learning, Verlag Dr. Kovač, Hamburg, pp. 145-76.

Pineda, P. (2007), "La formación continua en España. Balance y retos de futuro", Relieve, Vol. 3, pp. 70-87.

Russ-Eft, D. and Preskill, H. (2008), "Improving the quality of evaluation participation: a meta-evaluation", Human Resource Development International, Vol. 11, pp. 35-50.

Wade, P. (1994), Measuring the Impact of Training, Kogan Page, London.

Further reading

Holton, E.F., Bates, R.A. and Ruona, W.E.A. (2000), "Development of a generalized learning transfer system inventory", Human Resource Development Quarterly, Vol. 11, pp. 333-60.

Macpherson, A., Elliot, M., Harris, I. and Homan, G. (2004), "E-learning: reflections and evaluation of corporate programmes", Human Resource Development International, Vol. 7, pp. 295-313.

Swanson, R.A. and Holton, E.F. III (2002), Resultados: Cómo evaluar el desempeño, el aprendizaje y la percepción en las organizaciones, Oxford University Press, Mexico City.

About the authors

Pilar Pineda holds a PhD in Education. Since 1994 she has held tenure as Professor of Economics of Education and Training in Organisations at the Universidad Autónoma de Barcelona, Spain. She is an expert on the evaluation of training and a member of the Research Group on Educational Policy and Training (GIPE). Her most recent research projects include the evaluation of training in the childhood sector in Spain (FTFE, 2006), the evaluation of mathematics teachers' training in Cataluña (Catalan Government, 2006-2008), the evaluation of training for doctors in Cataluña (UCF, 2007-2008), the analysis of training transfer factors in the public sector (EAPC, 2009) and the evaluation of training financed by the Spanish public administration (FTFE, 2010). She serves as a consultant for several public and private companies on training management, evaluation and quality. She is the author of five books and more than 20 papers on education and work. Pilar Pineda can be contacted at: [email protected]
