Over several recent blog posts, we’ve discussed the importance of finding a good talent fit for open positions and for the organization as a whole. Getting it wrong can lead to costly turnover and the need to keep spending time and resources filling the same position again and again.
We noted that traditional assessment methods often evaluate the candidate in a vacuum rather than against the needs of the overall team or company.
We discussed an alternative approach focused on first identifying gaps in the current group in terms of skills and personality traits and using assessments to find candidates who can fill those gaps. And we discussed some specific examples of skills-based and personality-based assessments.
But, regardless of the type of assessment you end up selecting, there is a very real possibility that candidates will try to “game” the assessment by giving the answers they think you’re looking for rather than answers that are accurate reflections of their knowledge, skills, abilities, and preferences.
According to Alli Besl, PhD, there are four “faking” indicators that recruiters and talent managers can use to help identify instances where candidates may be trying to game the system.
Covariance Index (CVI)
“The covariance index is computed by identifying pairs of unrelated items from a personality assessment using a sample that has low motivation to fake, in this case, current job incumbents,” explains Besl. “The CVI is then applied to the sample of interest, job applicants, and fakers are identified.”
In other words, the baseline comes from current employees, who have little incentive to fake. Because the paired items are unrelated, honest responders’ answers to them should show little covariance; applicants whose answers covary unusually strongly across those pairs are flagged as likely fakers.
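As a rough illustration of the idea, here is a minimal sketch in Python. The item names, incumbent data, and the cutoff value are all hypothetical; in practice the unrelated item pairs would be identified statistically from the incumbent sample, a step assumed to be done already here.

```python
from statistics import mean

# Hypothetical item pairs that showed near-zero covariance in a
# low-faking incumbent sample (pair selection assumed already done).
unrelated_pairs = [("item_a", "item_x"), ("item_b", "item_y")]

# Made-up incumbent responses on a 1-5 scale (the honest baseline).
incumbents = [
    {"item_a": 2, "item_x": 4, "item_b": 3, "item_y": 1},
    {"item_a": 4, "item_x": 2, "item_b": 2, "item_y": 5},
    {"item_a": 3, "item_x": 3, "item_b": 4, "item_y": 2},
]

# Item means from the incumbent sample, used for centering.
item_means = {
    item: mean(r[item] for r in incumbents) for item in incumbents[0]
}

def covariance_index(responses):
    """Average cross-product of centered scores over the unrelated pairs.

    Honest responders should hover near zero; uniformly "ideal" answers
    push both items in each pair in the same direction, inflating the index.
    """
    products = [
        (responses[i] - item_means[i]) * (responses[j] - item_means[j])
        for i, j in unrelated_pairs
    ]
    return mean(products)

applicant = {"item_a": 5, "item_x": 5, "item_b": 5, "item_y": 5}
flagged = covariance_index(applicant) > 1.0  # cutoff is an assumption
```

With these made-up numbers, the uniformly "ideal" applicant scores well above the incumbents on the index and gets flagged.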
Bogus Items
Bogus items are traps, in a sense. They do not represent true components of the job, meaning that if an individual reports having experience with such items, it’s a signal that he or she is attempting to game the system.
Social Desirability
Social desirability scales score responses against what is considered socially desirable. If we are being honest on assessments, we will not always choose the most socially desirable answer. So, candidates whose answers score unusually high on the social desirability scale are likely faking.
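One common way to operationalize "unusually high" is to standardize a candidate's social-desirability scale score against an honest baseline sample. The scores, the baseline, and the z-score cutoff below are all hypothetical, shown only to illustrate the approach.

```python
from statistics import mean, stdev

def sd_zscore(candidate_score, baseline_scores):
    """Standardize a candidate's social-desirability scale score
    against an honest baseline; very high z-scores suggest faking."""
    mu, sigma = mean(baseline_scores), stdev(baseline_scores)
    return (candidate_score - mu) / sigma

baseline = [10, 12, 11, 13, 9, 12, 11, 10]  # made-up incumbent scores
z = sd_zscore(19, baseline)   # candidate far above the baseline
flagged = z > 2.0             # cutoff is an assumption, not a standard
```

A candidate scoring several standard deviations above the incumbent mean would be worth a closer look before relying on their other assessment results.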
Blatant Extreme Responding (BER)
If we think of a question that starts with something like “On a scale of 1 to 5, how strongly would you agree that…,” where 1 and 5 mean “totally disagree” and “totally agree,” BER looks at how frequently candidates answer at the extremes—the 1s and 5s—and treats an unusually high rate of extreme answers as a potential sign of faking.
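The computation itself is simple: count the proportion of answers that sit at the scale endpoints and compare it with the rate typical of honest responders. The sample answer sets below are invented for illustration.

```python
def extreme_response_rate(answers, scale_min=1, scale_max=5):
    """Proportion of answers at the scale endpoints (e.g., 1s and 5s)."""
    extremes = sum(1 for a in answers if a in (scale_min, scale_max))
    return extremes / len(answers)

# Made-up response sets on a 1-5 scale.
honest = [3, 4, 2, 5, 3, 4, 2, 3, 1, 4]   # occasional extremes
faker  = [5, 5, 1, 5, 5, 5, 1, 5, 5, 5]   # nothing but extremes
```

A candidate answering almost exclusively with 1s and 5s would stand out sharply against a baseline of honest responders, though the cutoff for "too extreme" would have to be set from your own data.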
Assessments can be an important, and highly predictive, element of candidate selection. Hopefully, over the course of this four-post series on assessments, readers have gained a greater appreciation for the challenges talent managers and recruiters face in filling open positions, and for some of the tools and strategies available to help them make the right choices.