Common myths about performance reviews debunked
Performance appraisals are one of the most ubiquitous, and also one of the most unpopular, practices in the workplace. In fact, several companies have recently made headlines for their attempts to do them differently. But amid these changes, how many organizations have taken a close look at how performance reviews actually operate in their own workplace over the long term?
My colleague Martin Conyon and I recently had the opportunity to do just that, taking a deep dive into the performance appraisal data of a large U.S. corporation between 2001 and 2007. We looked at all of the scores and associated employment outcomes over those years, and what we found punctures many of the myths about performance reviews that have developed over time.
Myth 1: Assessment scores don’t vary much — most everyone gets an above average score and few, if any, get poor scores.
In fact, appraisal scores varied quite a bit across individuals. Yes, there was an upward bias: the average employee was rated slightly above “average” on the appraisal scale. But the shape of the distribution looked surprisingly normal (measured via kernel density estimation, a statistical technique for plotting data that smooths out irregularities). There were actually slightly more “poor” scores (the lowest rating) than “excellent” scores (the highest).
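Kernel density estimation is straightforward to sketch: each observation contributes a small Gaussian bump, and the bumps are averaged into a smooth curve. A minimal Python illustration on synthetic ratings (the 1–5 scale, the probabilities, and the bandwidth are all assumptions for illustration, not the study's data):

```python
import numpy as np

# Hypothetical 1-5 appraisal scores with a slight upward bias (synthetic,
# illustrative data only). A kernel density estimate smooths the discrete
# ratings into a continuous curve so the distribution's shape is visible.
rng = np.random.default_rng(0)
scores = rng.choice([1, 2, 3, 4, 5], size=1000,
                    p=[0.05, 0.15, 0.35, 0.35, 0.10])

def kde(data, grid, bandwidth=0.4):
    # Density at each grid point: average of Gaussian kernels centred on data.
    z = (grid[:, None] - data[None, :]) / bandwidth
    return np.exp(-0.5 * z**2).mean(axis=1) / (bandwidth * np.sqrt(2 * np.pi))

grid = np.linspace(1, 5, 100)
density = kde(scores, grid)

print(round(scores.mean(), 2))  # slightly above the scale midpoint of 3
```

Plotting `density` against `grid` would show the smoothed bell-like shape described above, with the mean sitting a bit above the midpoint.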
Myth 2: People who are good performers tend always to be good performers; poor performers tend always to be bad; and the workforce is made up of a stable group of A, B, and C players.
At the company we studied, there was little evidence that good performers this year would be good performers next year. In fact, knowing this year’s scores explained only about a third of the variation in next year’s scores across the same employees. Changing managers didn’t appear to have any consistent effect on scores either, contrary to the view that supervisors get cozy with subordinates and give them higher scores over time.
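The “one-third” figure is an R² statement: this year’s score accounts for roughly a third of the variance in next year’s. A quick simulation on synthetic, standardized scores (the persistence level is the only number taken from the text; everything else is assumed) shows what that degree of persistence looks like:

```python
import numpy as np

# Simulate two years of standardized appraisal scores where this year's
# score explains about one-third of next year's variation (illustrative
# synthetic data, not the study's).
rng = np.random.default_rng(1)
n = 10_000
r = np.sqrt(1 / 3)                                   # correlation giving R^2 = 1/3
year1 = rng.standard_normal(n)
year2 = r * year1 + np.sqrt(1 - r**2) * rng.standard_normal(n)

r_squared = np.corrcoef(year1, year2)[0, 1] ** 2
print(round(r_squared, 2))                           # close to 0.33
```

At that persistence level, two-thirds of next year’s variation is unrelated to this year’s score, which is why last year’s rating is such a weak guide to who the “A players” will be.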
There is simply no support for the idea that the workforce divides into good performers who stay good, poor performers who stay poor, and another group stuck permanently in the middle. Certainly there were employees who performed poorly over time, and they tended to get fired. But the notion that the appraisal score in any given year should be the basis for long-term outcomes, as when forced rankings lead to dismissals, has no support in these results.
Myth 3: Appraisal scores don’t drive pay or promotions. Managers are too timid, so they give modest increases to their best performers and rarely hold back on increases for poor performers.
We found no evidence that supervisors were holding back increases for their best performers and over-rewarding the slackers. In fact, we found the opposite: they disproportionately rewarded the best performers and disproportionately held back the worst. The best performers also got the best bonuses and were more likely to get promoted; the worst were more likely to get fired. In other words, appraisals worked exactly as proponents hoped they would.
But we also found that supervisors didn’t simply use appraisals as a “settling up” exercise, as many economists assume, where merit pay increases were based solely on job performance over a single year. In fact, merit pay increases rewarded employees for improving performance year-over-year, not just for high levels of performance on a one-time basis.
Granted, this is only one company, although there is nothing about it that would cause us to think its experience is unique; and to our knowledge, there are no available studies suggesting these findings don’t hold elsewhere. So if you’re sure that performance appraisals at your own company don’t work, you ought to look at your own long-term data to make sure. Maybe they’re working exactly the way they were intended.
Author: Peter Cappelli