A FIELD STUDY OF PARTICIPANT REACTIONS TO A DEVELOPMENTAL ASSESSMENT CENTRE: TESTING AN ORGANISATIONAL JUSTICE MODEL

Michael M. Harris, College of Business Administration, University of Missouri-St. Louis, USA; Matthew Paese, Development Dimensions International, USA; Leslie Greising, Bethel College, USA. Correspondence concerning this article should be addressed to Michael M. Harris, College of Business, 1 University Boulevard, University of Missouri-St. Louis, St. Louis, MO 63121, USA. E-mail: mharris@umsl.edu

Although assessment centres are being increasingly employed for developmental purposes, there has been a dearth of research regarding them. We investigated an organisational justice theory model suggested by Cohen-Charash and Spector (2001) in this relatively novel context. The model included antecedents (e.g., perceived validity), organisational justice perceptions (i.e., distributive justice and procedural justice), and one outcome (i.e., feedback utility perceptions). Most of our hypotheses were supported, providing considerable support for the model. The predicted effect for perceived fakability was not supported. Contrary to our hypothesis, distributive justice perceptions were at least as important as procedural justice perceptions in predicting feedback utility perceptions. A direct test of the effect of context on organisational justice theory is recommended.
Although assessment centres have been employed primarily for selection and promotion decisions (Howard, 1997), they are being increasingly used for employee development and feedback purposes (Spychalski, Quiñones, Gaugler, & Pohley, 1997). Developmental assessment centres are likely to differ in a number of ways from assessment centres designed for making HR decisions (e.g., employee selection or promotion decisions). A properly designed developmental assessment centre, for example, should invoke a mastery orientation with a learning emphasis on the part of participants, while an effectively designed HR decision assessment centre should invoke a performance orientation wherein participants are motivated to demonstrate their maximum performance capabilities. In spite of the surge of interest in assessment centres for developmental purposes, there is a paucity of research in this area. The research that has been conducted to date has focused on whether various behaviours, work attitudes, and related psychological measures change as a result of participation in a developmental assessment centre. Aside from some studies applying a performance feedback approach (e.g., Kudisch, Lundquist, & Smith, 2001), the extant research has largely neglected the relationships between developmental assessment centre procedures and features and participant reactions to the program. As a result, it is difficult to understand why the effects of developmental assessment centres have been mixed (e.g., Engelbrecht & Fischer, 1995; Jones & Whitmore, 1995) and sometimes short-lived (e.g., Fletcher, 1991; Fletcher & Kerslake, 1993). One major problem is that most of this research has been conducted without a theoretical framework (Lievens & Klimoski, 2001). Thus, one purpose of this paper was to address a gap in theorising about participant reactions to a developmental assessment centre.
Because it is one of the most well-researched frameworks for understanding individual reactions, organisational justice theory may be helpful in understanding participant reactions to a developmental assessment centre. Organisational justice theory has been applied to many different kinds of human resource judgments and decisions, including performance appraisals (e.g., Erdogan, Kraimer, & Liden, 2001; Greenberg, 1986), employee selection (e.g., Bauer, Maertz, Dolen, & Campion, 1998; Ployhart & Ryan, 1998), pay raises (e.g., Folger & Konovsky, 1989), drug testing (e.g., Konovsky & Cropanzano, 1991), and layoffs (e.g., Brockner, Wiesenfeld, & Martin, 1995). According to organisational justice theory, individuals base their reactions to a decision on two major considerations, namely, the outcome received and the procedures upon which the outcome was based (Cropanzano & Greenberg, 1997; Gilliland, 1994). The first consideration, generally referred to as distributive justice, assumes that people compare the outcomes they receive to the outcomes that others have received. Or, in the absence of such comparative information, people will compare the outcome they received with the outcome they expected to receive (van den Bos, Wilke, Lind, & Vermunt, 1998). The second consideration, generally referred to as procedural justice, focuses on various factors that may affect how decisions are made. If the rules used for making the decision appear to be consistent, for example, perceptions of procedural justice will be higher.
Despite the wide range of decisions that organisational justice theory has been applied to, recent scholars have called for further investigation of the effect of context in organisational justice research (Cropanzano & Greenberg, 1997; see also Cohen-Charash & Spector, 2001). Specifically, they have argued that the nature of the situation is likely to affect the importance of various organisational justice factors. As an example, Cohen-Charash and Spector (2001) speculated that procedural justice may be more important than distributive justice when difficult decisions are being made, such as in a layoff context. A second purpose of the present study, then, was to examine organisational justice theory in a relatively novel context. Specifically, unlike most field-based organisational justice theory studies where the decision has an immediate effect on people's lives (e.g., pay raises, job opportunities), this investigation was conducted in a field setting where ratings and feedback were made for employee development purposes only. Next, we describe our hypotheses in greater detail.

Development of hypotheses
In a recent meta-analytic review, Cohen-Charash and Spector (2001) provided a model for organisational justice theory, incorporating antecedents of organisational justice (e.g., organisational practices), as well as the outcomes of organisational justice (e.g., attitudes). According to their model, organisational justice perceptions (e.g., procedural and distributive justice perceptions) mediate the relationship between the antecedents and the outcomes. As shown in Figure 1, we used their model to test relationships between selected antecedents and consequences of organisational justice perceptions.

Antecedents of organisational justice perceptions
Cohen-Charash and Spector (2001) separated antecedents of organisational justice into several categories, including factors that primarily affect distributive justice perceptions (e.g., valence of outcomes) and factors that primarily affect procedural justice perceptions (e.g., adherence to justice rules). We grouped our antecedents in a similar fashion, as explained next.

Figure 1: Organisational justice theory model

Variables primarily linked to distributive justice
According to Gilliland (1993), test-takers compare the outcome (e.g., the actual test score) they receive with the outcome they expected to receive (e.g., the expected test score) in determining distributive justice. While equity theory has traditionally assumed that disparities in either direction will lead to perceptions of unfairness, research has found little evidence that doing better than expected leads to perceptions of unfairness (Greenberg, 1987). Rather, the more one's expectations are exceeded, the more fair the outcome is believed to be. Hence, the higher one's actual test score, and the lower one's expected test score, the more fair the outcome is likely to be perceived.
Despite the assumption that test-takers compare the outcomes they receive to their expectations in determining outcome favourability, most of the studies conducted in a selection context have only partially examined this factor. In a field study, Smither, Reilly, Millsap, Pearlman, and Stoffey (1993) reported that actual test scores were moderately related to distributive justice and only weakly related to procedural justice. They did not, however, examine the effect of expected test scores. Similarly, Bauer et al. (1998) examined outcome favourability (whether applicants passed or failed the test) and found this variable to be significantly related to a global measure of test fairness. Again, however, they did not consider expected outcomes. Finally, although Macan, Avedon, Paese, and Smith (1994) examined both self-ratings of performance on each test and actual test scores, they did not examine these two variables in tandem with each other. Moreover, they found quite mixed results. Specifically, while the expected cognitive ability test score was positively related to perceptions of the selection process and the organisation, a negative relationship was reported for expected assessment centre results. Actual scores showed a very small relationship with various perceptions of the selection process. Only one study has examined both expected outcome and actual outcome as determinants of test fairness. As hypothesised, Gilliland (1994) found that participants who expected to be hired, but were not, rated distributive justice lower than other participants. A similar, but less pronounced, effect was found for procedural justice. Finally, a number of researchers in the performance appraisal area have also investigated the role of ratings or actual outcome on employee reactions (e.g., Dipboye & de Pontbriand, 1981; Landy, Barnes, & Murphy, 1978). Few studies, however, have examined expected ratings, despite their importance (Fedor, 1991).
Taken together, the above discussion suggests the following hypotheses:

Hypothesis 1a: Actual assessment centre ratings will be positively related to organisational justice perceptions.

Hypothesis 1b: Expected assessment centre ratings will be negatively related to organisational justice perceptions.

Variables primarily linked to procedural justice
Perhaps one of the most important determinants of procedural justice is the perceived job relatedness, or validity, of the tests (Gilliland, 1993; Hausknecht, Day, & Thomas, 2004). Indeed, Bauer, Truxillo, Sanchez, Craig, Ferrara, and Campion (2001) found that perceived validity formed a separate second-order factor, which they named "job-relatedness content", that was empirically distinct from other procedural justice variables. Several studies have demonstrated that there is a link between the perceived validity of a test and participant reactions to a testing program (e.g., Bauer et al., 1998; Gilliland, 1994; Smither et al., 1993). In their meta-analysis, Hausknecht et al. (2004) reported population correlations of .63 and .39, respectively, with procedural justice and distributive justice. Kudisch and Ladd (1997) reported that a scale measuring job relatedness was highly correlated with feedback acceptance by participants in a developmental assessment centre. Hence, there is ample evidence that perceived validity is related to organisational justice perceptions.
The final variable which we included as a possible antecedent of organisational justice perceptions is perceived fakability. Both Arvey and Sackett (1993) and Gilliland (1993) speculated that perceptions of fakability were likely to affect candidate perceptions of test fairness. Despite a dearth of research regarding the effect of perceived fakability on justice reactions, there is a growing interest in this little-studied construct (e.g., Viswesvaran & Ones, 2004). Kluger and Rothstein (1993) reported that different employment tests were rated as being differentially susceptible to faking; contrary to their predictions, however, they did not find a statistically significant relationship between perceived fakability and a global measure of test fairness. One possible explanation for the lack of a relationship follows from Gilliland (1993), who suggested that perceived fakability may affect procedural justice. However, Gilliland did not hypothesise that perceived fakability would affect distributive justice. Recent meta-analytic reviews have indicated the importance of studying different aspects of organisational justice separately (Colquitt, Conlon, Wesson, Porter, & Ng, 2001); because Kluger and Rothstein (1993) used a global measure of test fairness, they may not have detected a relationship between fakability and procedural justice. It seems appropriate, then, to examine perceived fakability separately for procedural and distributive justice. Based on the above discussion, the following hypotheses are offered:

Hypothesis 2a: Perceived validity will be related to organisational justice perceptions.

Hypothesis 2b: Perceived fakability will be related to organisational justice perceptions.

Organisational justice perceptions as determinants of perceived feedback utility
Like Cohen-Charash and Spector (2001), our model assumes that organisational justice perceptions affect attitudes. Research has suggested that, in general, both procedural and distributive justice perceptions are related to criteria of interest (Greenberg, 1987; Hausknecht et al., 2004). The theoretical focus has therefore turned to the relative importance of different organisational justice dimensions in predicting consequences (Colquitt et al., 2001). Towards that end, two theories have been offered as to the relative importance of distributive versus procedural justice. The distributive dominance model (Leventhal, 1980) posits that distributive justice will be more important relative to procedural justice. The two-factor model posited by Sweeney and McFarlin (1993) assumes that procedural justice has a larger influence on organisational outcomes (e.g., organisational commitment) and that distributive justice is more important for personal outcomes (e.g., pay satisfaction). Based on a meta-analytic study, Colquitt et al. (2001) reported only mixed support for these two models. In the developmental assessment centre context, where the focus is on a mastery orientation, it would seem that feedback utility should be more closely related to procedural justice than to distributive justice. Furthermore, the ratings are not going to be used for HR decisions, and the feedback is for developmental purposes only. As a result, the level of the ratings (i.e., whether they are high, average, or low) should be less important than the processes (e.g., whether the assessment procedures were job related) used to obtain the ratings. Thus, it would seem that the fairness of the outcomes should matter less than how the process was conducted. The following hypotheses were therefore made:

Hypothesis 3a: Both procedural justice perceptions and distributive justice perceptions will be related to feedback utility perceptions.

Hypothesis 3b: Compared to distributive justice perceptions, procedural justice perceptions will be more highly related to feedback utility perceptions.

Finally, our model implies that the antecedents affect organisational justice perceptions, which in turn affect the outcome variable; once organisational justice perceptions are taken into account, therefore, there should be no remaining relationship between the antecedents and the outcome variable. Thus, the following hypothesis was made:

Hypothesis 4: Distributive and procedural justice perceptions will completely mediate the relationship between antecedents of organisational justice perceptions and feedback utility perceptions.

In sum, the purpose of this paper was twofold. First, we apply organisational justice theory to an area where a guiding framework is needed to better understand participant reactions. Second, the relatively novel situation examined here provides an opportunity to see whether previous findings regarding organisational justice theory generalise across contexts.

Participants
The participants were first-line supervisors employed by an international automobile manufacturer; each supervisor managed approximately 6-10 designers. The designers used engineering data to create 3-dimensional images of product components. The supervisors were primarily responsible for management, rather than technical, tasks. The organisation had implemented the assessment centre in a continuing effort to enhance the overall leadership of its supervisory workforce and to prepare the organisation for emerging challenges. Thus, a major purpose of the assessment centre was to provide these supervisors with an evaluation of their current strengths and weaknesses in terms of dimensions that would be important in the future.
Although all supervisors in the department were asked to participate in the developmental assessment centre, employees with less than one year of experience in the position and those employees who were very close to retirement were given the option of not participating. A total of 103 supervisors ultimately participated; only a few supervisors chose not to participate for the above reasons. However, two of the participants did not have a face-to-face meeting with an assessor, so they were excluded from the analyses. For 12 of the participants, both questionnaires were completed at the same time, so they were also excluded from the analyses. Missing data for some of the variables reduced the sample size to 72.
In terms of demographics, the majority of the participants were White (88%) men (89%) between the ages of 41 and 60 (81%). The average tenure at the company was 24.1 years (SD = 9.6 years) and the average tenure on the job was 5.6 years (SD = 5.8 years).

Procedure
By way of overview, participants went through a two-step process. First, they completed the assessment centre exercises. Second, they attended a feedback session regarding their assessment centre performance. After each step, participants completed a questionnaire for the purposes of this study. The first questionnaire, which was completed immediately after all of the exercises were finished, contained most of the antecedent measures (i.e., expected performance; perceived validity; and perceived fakability). The measure of actual performance, as described in greater detail below, was an independent measure completed by the assessors. The second questionnaire, which was completed after the feedback session was provided, contained the organisational justice perceptions and feedback utility perceptions measures. In light of Hausknecht et al.'s (2004) concern that the timing of measurements may affect the results, we felt it was particularly important to separate our questionnaires into two time periods.
In greater detail now, the first step of the process was a one-day assessment centre involving three exercises. The first exercise, an in-basket, introduced a fictitious organisation with background information on the participant's role, the company's product, and the relevant processes. Participants were given 3.5 hours in which to review and respond to a series of memos, faxes, letters, and other materials. In the course of the in-basket exercise, participants were also given information on two separate role-plays. One role-play involved a subordinate who was having a problem. Participants were instructed that they had 15 minutes to prepare for this meeting. The other role-play involved a co-worker who was interested in initiating an activity that would interfere with the participant's unit productivity.
The first questionnaire was completed by the participants immediately after all of the exercises were finished. Participants were independently evaluated by assessors on eight behavioural dimensions, such as oral communication, decision making, analysis, planning, and delegation. Each dimension was rated on a three-point scale (S = strength, A = acceptable, D = deficiency). For each of these dimensions, assessors made ratings on several sub-dimensions, using a four-point scale ranging from most effective ('++') to least effective ('--'). Oral communication, for example, had three sub-dimensions (i.e., mechanics, organisation, and delivery). There were a total of thirty-six sub-dimensions. Because of the limited range of the rating scale on the dimensions, we analysed only the sub-dimensions (for the purpose of the analyses described below, we converted these ratings to a 1-4 scale, such that '++' was a 4, '+' was a 3, '-' was a 2, and '--' was a 1).
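The rating conversion and summing procedure described above can be sketched as follows. This is a minimal illustration; the ratings shown are hypothetical, not data from the study.

```python
# Convert assessor sub-dimension ratings to a 1-4 numeric scale and
# sum them into an overall assessment centre score, as described above.
RATING_MAP = {'++': 4, '+': 3, '-': 2, '--': 1}

def overall_score(sub_dimension_ratings):
    """Sum of the sub-dimension ratings after numeric conversion."""
    return sum(RATING_MAP[r] for r in sub_dimension_ratings)

# Hypothetical participant: 36 sub-dimension ratings, mostly '+'
ratings = ['++'] * 6 + ['+'] * 20 + ['-'] * 8 + ['--'] * 2
print(overall_score(ratings))  # possible totals range from 36 to 144
```

With 36 sub-dimensions, the summed score can range from 36 (all '--') to 144 (all '++').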
The feedback session, the second step of the process, took place 3-5 days after the assessment centre exercises. Each participant received a final report of approximately 20 pages, which summarised his or her performance and provided detailed information about the behaviours demonstrated in each exercise, along with the ratings received for each exercise. A trained assessor discussed the report with each participant in a private session, clarifying the information and answering questions. The assessor also discussed possible developmental activities with each participant. The second questionnaire was completed immediately after the feedback session was finished. It is important to note that participants were informed that individual results were shared only with that employee; an aggregate report, summarising the group as a whole, was generated for management for planning purposes. The entire assessment centre process was designed and conducted by an outside consulting firm.

Questionnaire 1 measures
All of the antecedent measures were assessed on questionnaire 1, aside from the actual assessment centre ratings, which were made independently by the assessors. We adapted items from Smither et al.'s (1993) scales to create our perceived validity measure. Specifically, we selected items to measure job relatedness (e.g., "There is a clear connection between the in-basket exercise and my job") and items to measure feedback validity (e.g., "One can learn a lot about an employee's strengths and weaknesses from the results on the in-basket exercise"). These items were combined to form a 12-item perceived validity scale (α = .90). Fakability (α = .75) was measured using a two-item scale (e.g., "It would be impossible to fake good performance on the in-basket exercise"). The actual assessment centre rating (α = .82) was the sum of the 36 sub-dimensions that were rated by the assessors. The expected assessment centre rating (α = .78) consisted of three items (e.g., "Overall, how well do you think you performed in the assessment centre?") rated on a four-point scale (1 = "Somewhat below average" and 4 = "Well above average").

Questionnaire 2 measures
The organisational justice perceptions measures and feedback utility perception measures were obtained from questionnaire 2. Two items (α = .91), adapted from Smither et al.'s (1993) scale, were used to measure procedural justice (e.g., "Overall, I felt the assessment centre process was fair"). Three items (α = .92), adapted from Smither et al.'s scale, were used to measure distributive justice (e.g., "I deserved the results that I received in the assessment centre"). Three items (α = .79), adapted from Greller (1978), were used to measure perceived utility of the feedback (e.g., "The feedback from the assessment centre helped me learn how to do my job better").
Except for the actual and expected assessment centre ratings, all measures were made on a 7-point scale (1 = completely disagree; 7 = completely agree). The scale scores were the average rating for the items that comprised the scale.
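A minimal sketch of how scale scores of this kind, and the internal-consistency (Cronbach's alpha) estimates reported above, are typically computed. The item responses below are synthetic, not the study's data.

```python
import statistics

def scale_score(item_responses):
    """Scale score = mean of the item ratings (e.g., three 7-point items)."""
    return sum(item_responses) / len(item_responses)

def cronbach_alpha(data):
    """Cronbach's alpha for rows of respondents x columns of items:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(data[0])                      # number of items
    items = list(zip(*data))              # transpose to items x respondents
    item_vars = [statistics.pvariance(col) for col in items]
    total_scores = [sum(row) for row in data]
    total_var = statistics.pvariance(total_scores)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Synthetic responses: 5 respondents x 3 items on a 7-point scale
data = [[6, 5, 6], [4, 4, 5], [7, 6, 7], [3, 3, 4], [5, 5, 6]]
print(scale_score(data[0]))
print(round(cronbach_alpha(data), 2))
```

When the items move together across respondents, as in this synthetic example, alpha approaches 1; near-independent items would drive it toward zero.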

Results
Means, standard deviations, and correlations for the variables are provided in Table 1. Several results are noteworthy. First, perceived validity had an average rating of 5.08 on a 7-point scale. This is roughly comparable to the figure that Hausknecht et al. (2004) reported for the favourability ratings of work samples (M = 3.61 on a 5-point scale). Second, procedural justice (M = 5.53; SD = 1.18) was on average rated much higher than distributive justice (M = 4.71; SD = 1.21) here, a difference of about two-thirds of a standard deviation. By comparison, Smither et al. (1993) reported a much smaller difference (about one-fifth of a standard deviation) between procedural (M = 3.71 on a five-point scale) and distributive (M = 3.50 on a five-point scale) justice. We also found a relatively high average rating for utility of the feedback (M = 5.44; SD = .83). Apparently, while participants were somewhat neutral about the distributive justice, they felt quite positive about the procedural justice and the utility of the feedback. Turning to the correlations between some of our measures, it is noteworthy that distributive justice and procedural justice were highly related (r = .80). Indeed, these two constructs correlated much more highly than in meta-analytic results reported by both Cohen-Charash and Spector (2001) and Colquitt et al. (2001), who reported mean uncorrected correlations of .51 (for field studies) and .57, respectively. Nevertheless, other studies have reported relatively high correlations between procedural and distributive justice. For example, Gilliland (1994) reported a correlation of .72 between these two variables. It is possible that the high correlation is due to the context, although the mechanism that would produce it is unclear. Also noteworthy is the relatively low correlation (r = .20) between expected and actual assessment centre ratings. By way of comparison, Macan et al.
(1994) found a correlation of only .10 between actual and expected performance on an assessment centre; the correlation between actual and expected performance on a cognitive ability test battery was much higher (r = .40). Based on these findings, it would seem that participants have a somewhat difficult time judging how well they do in an assessment centre and are more effective in judging their performance on cognitive ability tests.
To test Hypotheses 1a through 3b, we performed multiple regression analyses, using procedural justice perceptions, distributive justice perceptions, and feedback utility perceptions as the criteria. To test the mediation model predicted by Hypothesis 4, in addition to some of the analyses performed for our other hypotheses, we used a two-step multiple regression analysis (Baron & Kenny, 1986).
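The two-step mediation logic (Baron & Kenny, 1986) used for Hypothesis 4 can be sketched with ordinary least squares on simulated data. Everything here is illustrative: the variable names echo the study's constructs, but the data are generated to exhibit full mediation by construction.

```python
import numpy as np

def ols_fit(X, y):
    """OLS coefficients (with intercept) and R-squared via least squares."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    r2 = 1 - resid.var() / y.var()
    return beta, r2

rng = np.random.default_rng(0)
n = 200
validity = rng.normal(size=n)                   # antecedent
justice = 0.8 * validity + rng.normal(size=n)   # mediator
utility = 0.7 * justice + rng.normal(size=n)    # outcome (fully mediated)

# Step 1: regress the outcome on the antecedent alone
b1, r2_1 = ols_fit(validity.reshape(-1, 1), utility)
# Step 2: regress the outcome on the antecedent plus the mediator
b2, r2_2 = ols_fit(np.column_stack([validity, justice]), utility)

print(round(r2_1, 2), round(r2_2, 2))
# Full mediation pattern: the antecedent's weight shrinks toward zero
# once the mediator enters the equation, and explained variance rises.
```

The pattern reported in Table 3 mirrors this logic: the antecedents' beta weights become non-significant once justice perceptions are added at step 2.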
Recall that Hypothesis 1a stated that actual assessment centre ratings would be related to organisational justice perceptions. Indeed, as shown in Table 2, the multiple regression analyses indicated that this variable was significantly related to both procedural and distributive justice perceptions. Thus, Hypothesis 1a was supported.
As shown in Table 2, there was partial support for Hypothesis 1b, which predicted that expected assessment centre ratings would be negatively related to organisational justice perceptions. Specifically, while expected rating was significantly (and negatively) related to distributive justice perceptions, it was not significantly related to procedural justice perceptions.
Hypothesis 2a predicted that there would be a significant relationship between perceived validity and organisational justice perceptions. As shown in Table 2, this hypothesis was supported. Specifically, perceived validity was significantly related to both procedural justice and distributive justice perceptions. However, Hypothesis 2b, which predicted that perceived fakability would be related to organisational justice perceptions, was not supported. As indicated in Table 2, this variable was not statistically significant in either of the multiple regression analyses. In fact, as indicated in Table 1, perceived fakability was not significantly correlated with either procedural justice or distributive justice perceptions.
Hypothesis 3a predicted that procedural justice perceptions and distributive justice perceptions would be significantly related to feedback utility perceptions. Hypothesis 3b, however, suggested that procedural justice perceptions would be more closely related to feedback utility perceptions than distributive justice perceptions would be. A multiple regression analysis (F(2, 71) = 29.69, p < .01; R² = .46) including these two variables as the predictors, and feedback utility perceptions as the criterion, showed support for Hypothesis 3a. Specifically, both procedural justice (b = .33, t = 2.27, p < .05) and distributive justice (b = .38, t = 2.61, p < .05) perceptions had statistically significant beta weights. At the same time, Hypothesis 3b, which assumed that procedural justice would be more heavily weighted than distributive justice, was not supported, as the regression weight for procedural justice was smaller than the regression weight for distributive justice (note that the zero-order correlations were also nearly identical: .65 and .64, respectively).
Finally, recall that Hypothesis 4 predicted that organisational justice perceptions would completely mediate the relationship between the antecedents and feedback utility perceptions. To test this hypothesis, we performed a two-step multiple regression analysis. The results of this analysis are shown in Table 3. In the first step, we regressed our criterion, feedback utility perceptions, on our antecedents (i.e., expected assessment centre rating, actual assessment centre rating, perceived validity, and perceived fakability). As shown in Table 3, these four variables explained 22% of the variance in feedback utility perceptions, with perceived validity showing a statistically significant beta weight. In the second step, we regressed our criterion on the four antecedent variables, as well as on organisational justice perceptions. The amount of variance explained in this equation was 49%. Only distributive justice perceptions had a statistically significant beta weight in this equation. Most importantly, none of the four antecedent variables was significantly related to feedback utility perceptions. Because the other links in our model were largely supported (i.e., Hypotheses 1a-2a, and Hypothesis 3a), overall our results supported Hypothesis 4.

Discussion
Although assessment centres are increasingly being used as a developmental tool, there is little research regarding participant reactions to them (Lievens & Klimoski, 2001). We investigated an organisational justice theory model proposed by Cohen-Charash and Spector (2001) in this rather unique context. For the most part, our hypotheses were supported. Specifically, our hypotheses regarding relationships between actual assessment centre ratings, expected assessment centre ratings, and perceived validity with organisational justice perceptions were largely supported. Furthermore, as predicted, procedural justice and distributive justice perceptions were related to feedback utility perceptions. Finally, as hypothesised, organisational justice perceptions mediated the relationship between the antecedents and feedback utility perceptions. Two hypotheses were not supported. First, Hypothesis 2b (which predicted that perceived fakability would be related to organisational justice perceptions) was not supported, as perceived fakability was not related to either procedural justice perceptions or distributive justice perceptions. Second, Hypothesis 3b (which predicted that, compared to distributive justice, procedural justice perceptions would be more closely related to feedback utility perceptions) was not supported. Instead, we found that distributive justice perceptions and procedural justice perceptions were roughly equally important in terms of their relationship with feedback utility perceptions.

Table 3: Results of multiple regression analyses for feedback utility perceptions (columns: Predictor, Step 1, Step 2)
One likely reason for the lack of relationship between perceived fakability and organisational justice perceptions is that this assessment centre was for developmental purposes only. The fact that exercises may have been fakable was therefore likely to have been perceived as having little consequence. Perceived fakability may, however, have a greater effect on justice perceptions if the assessment procedure were used for making hiring or promotion decisions. In other words, the context may affect the importance of this variable. Alternatively, perceived fakability may be separated into motivation to fake and ability to fake. The focus of our measure was on ability to fake, which seemed more relevant to the procedural justice of assessment tools; our logic was that if candidates believe that the assessment tools are more susceptible to faking, the procedural fairness may be more open to question. It is possible, however, that motivation to fake would produce different results than ability to fake.
While context seems likely to affect the importance of perceived fakability, it may at first seem surprising that Hypothesis 3b was not supported (i.e., distributive justice perceptions were at least as important as procedural justice perceptions in predicting feedback utility perceptions). More careful consideration of the feedback literature, however, suggests that the favourability of feedback may have an important effect on how the information is perceived by the recipient. Specifically, a large body of literature in social psychology (e.g., Swann & Read, 1981; see also Sedikides & Strube, 1997) indicates that individuals generally seek to avoid negative feedback. Similar research has been conducted in work settings (e.g., Larson, 1989). This research suggests that rather than being sought out and used as motivation to change, negative feedback may simply be ignored or attributed to external forces beyond one's control. Thus, people tend to avoid negative feedback, and when they do encounter it, it may serve to demoralise them (Larson, 1989). Hence, in the present context, even though the ratings were not going to be used for administrative decisions, distributive justice appears to play a significant role in how people respond.
In terms of future research, several suggestions are offered. First, there are other components of organisational justice that were not examined here. Specifically, Bies and Moag (1986) introduced a construct referred to as interactional justice. This construct may in turn be divided into two factors (Greenberg, 1993): interpersonal justice, which refers to the degree to which an individual is treated with dignity and respect by decision-makers, and informational justice, which concerns the explanations provided to people about how outcomes were distributed. We suspect that both of these factors may play an important role in understanding participant reactions to feedback. Further research is needed to understand how they fit within the model tested here.
Second, research comparing the factors that affect participant reactions to a developmental assessment centre versus an assessment centre for administrative decisions (e.g., for the purpose of making a hiring or promotion decision) is needed. Such a study would enable us to more directly test the effect of context on organisational justice theory. Third, given that our variables affected a number of participant reactions, it is important that we consider the links between these determinants, participant reactions, and long-term, behavioural outcomes (e.g., engagement in career development activities) on the job.
Fourth, researchers should investigate additional factors that may affect reactions to the developmental assessment centre process. For instance, Ashford and Cummings (1983) described several different motivations (e.g., uncertainty reduction, competence motive) that individuals may have for seeking feedback. They argued that the nature of an employee's motivation will affect his or her search for and use of feedback. In the present context, such factors may affect participant reactions to the feedback received. Future investigations should measure participants' motivations for participating to determine whether these affect the relationship between organisational justice perceptions and feedback utility perceptions.
Fifth, there are other models of organisational justice besides the one examined here. Gilliland (1993) offered a rather similar model for understanding applicant reactions in a selection context. We focused on the Cohen-Charash and Spector (2001) model for several reasons. First, the Cohen-Charash and Spector model is more recent than the Gilliland (1993) model. Second, it is more parsimonious than the Gilliland model. Finally, the Gilliland model was developed for a specific context (applicant selection), while the Cohen-Charash and Spector model was designed for a much broader context (organisational behaviour). We felt, therefore, that the Cohen-Charash and Spector model would be more appropriate in the present context, where employee developmental feedback is the focus.
Ultimately, we believe that the two models are quite similar. Future researchers may, however, wish to try to compare these and other models of organisational justice.
Finally, it would be interesting to compare participant reactions to different feedback processes, such as multisource ratings (e.g., Bracken, Timmreck, & Church, 2001), assessment centre exercises, and standardised testing methods, to determine whether there are differences in how they are perceived. Indeed, while there is a plethora of studies examining how test-takers perceive different selection procedures (e.g., Kluger & Rothstein, 1993; Smither et al., 1993), there are few studies comparing different feedback processes. More research is sorely needed here.
This study has several potential limitations. First, the sample size was relatively small, and as a result the statistical power was only modest. Specifically, given the sample size, for a moderate effect size (i.e., a correlation of .30) and an alpha level of .05, the power was .74. Nevertheless, given that most of our hypotheses were supported, statistical power does not appear to have been a major problem for this study. Second, some of the measures were based on self-reports, and common-method variance may therefore explain certain findings. On the other hand, three of our four antecedent variables (i.e., expected assessment centre ratings, perceived validity, and perceived fakability) were collected at a different time (i.e., 3-5 days earlier) than the organisational justice and feedback utility perception measures. In addition, the fourth antecedent variable, actual assessment centre ratings, was made independently by the assessors. Nonetheless, additional research examining longer-term outcomes, such as actual developmental activities undertaken by participants, would be of great value.
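A power figure of the kind quoted above can be approximated with the standard Fisher z transformation for a test of a correlation. The sketch below is illustrative only: the sample size shown (n = 74) is a hypothetical value chosen because it yields power near .74 under these settings, not the study's actual n.

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def corr_power(r: float, n: int) -> float:
    """Approximate power of a two-tailed, alpha = .05 test of
    H0: rho = 0, via the Fisher z approximation (the negligible
    probability of rejecting in the far tail is ignored)."""
    z_r = math.atanh(r)              # Fisher z of the true correlation
    se = 1.0 / math.sqrt(n - 3)      # standard error of Fisher z
    z_crit = 1.959964                # two-tailed critical value at alpha = .05
    return norm_cdf(z_r / se - z_crit)

# Hypothetical sample size (the study's actual n is not given here):
print(round(corr_power(r=0.30, n=74), 2))  # → 0.74
```

Note that power rises quickly with n under this approximation; at n = 150 the same effect size would be detected with power above .95.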
Finally, this study examined only a limited number of variables. Researchers have examined other variables, such as the opportunity to perform, which may affect organisational justice perceptions (Schleicher, Venkataramani, Morgeson, & Campion, 2006). Variables such as opportunity to perform may play at least as important a role as perceived validity and should be considered in future research as well.
In sum, we tested an organisational justice theory model in a rather novel context, where ratings were generated and shared with participants for feedback purposes only. Our study also provides a model for future research on developmental assessment centres. Despite the fact that the ratings were not going to be used for administrative decisions, our hypotheses regarding antecedent variables, organisational justice variables, and feedback utility perceptions were generally supported. This is also important in light of Hausknecht et al.'s (2004) concern that the timing of measurements may affect one's results. Somewhat surprisingly, given the context, distributive justice perceptions were related to feedback utility perceptions.