Norming Personality Assessments

Posted by Jarrett Shalhoop on Mon, May 09, 2011

Last month I chaired a panel at the annual SIOP Conference in Chicago on the topic of norming personality assessments. We had participation from a number of other test publishers, and a couple of audience members who added some real value to the discussion. The topics ranged from the factors that influence norms to the appropriateness of global norms and the implications of highly specialized norms. Overall I came away with a greater awareness that we’re all dealing with the same issues, and was pleasantly surprised that thinking in the field seems to be converging, at least to some extent. For those of you with an unquenchable thirst for all things norms, here’s a brief summary of some of the key takeaways.


1. Norms are critical for the interpretation of personality assessments. A reviewer of our SIOP submission suggested this might not be so clear cut. However, the entire panel and the active audience members were in complete agreement that without norms there is no effective way to interpret personality results.


2. There are a lot of factors that influence norms, and decisions about the appropriate level for norming are rarely obvious. Make a norm that’s too specific, and it likely loses interpretive value. Make a norm that’s too encompassing, and it likely just averages away true cultural differences, producing a norm that isn’t really representative of anyone (see the sketch after this list). The bottom line: selecting the appropriate level for norming is both an art and a science. Select a level that is conceptually meaningful and representative of the target population, and then put a lot of work toward minimizing differences due to extraneous factors such as language.


3. In reference to benchmarking vs. norming, the panel seemed to agree that the appropriateness of each varies by the level of specificity. Norms are appropriate for macro levels of analysis (e.g., country). As the level of analysis gets more specific, benchmarks become more appropriate. If you’re thinking about making a norm for left-handed, midwestern, senior Account Managers in the pharmaceutical industry named Robert, you should probably reconsider.


4. A fascinating bit of research shared by one of the panelists displayed personality characteristics in the US by state using a heat map. Check out Rentfrow, P. J., Gosling, S. D., & Potter, J. (2008). The findings? New Yorkers are as neurotic as you think, the West Coast is pretty high on Openness, and southern hospitality (Agreeableness) is real, though maybe not in Alabama.


5. Another good reference: for factors contributing to the variance in norms (error and otherwise), check out Meyer & Foster (2008). Their three-factor model lays things out nicely.
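

To make the averaging problem from takeaway #2 concrete, here’s a minimal sketch in Python. The norm parameters, the two-country setup, and the normal-distribution approximation are all invented for illustration; this isn’t any publisher’s actual scoring procedure. The point is simply that the same raw score lands at very different percentiles depending on the norm it’s compared against, and that a pooled “global” norm can end up describing no one in particular.

from statistics import NormalDist

# Hypothetical (mean, SD) norm parameters for a single personality scale,
# invented purely for illustration.
norms = {
    "Country A": (3.2, 0.6),
    "Country B": (3.8, 0.5),
}

# Build a pooled "global" norm as an equal-weight mixture of the two groups:
# mixture variance = average within-group variance + variance of the group means.
means = [m for m, _ in norms.values()]
variances = [sd ** 2 for _, sd in norms.values()]
pooled_mean = sum(means) / len(means)
pooled_var = (sum(variances) / len(variances)
              + sum((m - pooled_mean) ** 2 for m in means) / len(means))
norms["Pooled global"] = (pooled_mean, pooled_var ** 0.5)

raw_score = 3.5  # the same respondent's raw scale score

for group, (mean, sd) in norms.items():
    percentile = NormalDist(mu=mean, sigma=sd).cdf(raw_score) * 100
    print(f"{group:13}: {percentile:3.0f}th percentile")

Against Country A the score looks above average (roughly the 69th percentile), against Country B below average (around the 27th), and the pooled norm parks it at the 50th, a number that describes neither group. Real norming obviously involves far more than two groups and a normal approximation, but the averaging logic is the same one that makes an overly broad norm “representative of no one.”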


Overall, it was a good session with great contributions from the panelists and audience. We continue to struggle with some of the same issues, and hopefully collaborative efforts like this will help us arrive at a set of best practices and solutions to the problems that have plagued the field for years.

Topics: personality assessment, norms, benchmarking, norm interpretation
