By building predictive analytics into the video interviewing process, companies can automate their recruitment and, more importantly, predict future high performers from high-volume talent pools.
Many of LaunchPad’s clients have now been collecting data via our platform for more than eight years. Having collated healthy, robust and accurate information from every data point of every recruitment journey over an extended timescale, we’re now able to help them use this data to move to predictive analytics for hiring via LaunchPad PREDICT.
Here, LaunchPad’s Kenko Fujii shares his tips for best practice when collecting candidate data from video assessments with a view to adopting predictive analytics.
1. Carry out a job analysis and review interview criteria
A job analysis is the foundation for building an accurate data set. The job analysis sets out the key skills and attributes required from the person who is going to be carrying out a role. The output is a description of the skills, aptitudes, behaviours and values that underpin effective job performance. This is often expressed in terms of a competency or strengths framework.
This step helps clarify what to ask the candidate during the video interview and what to measure through your review criteria. Effective video assessments are designed to be job relevant, with questions designed to elicit responses that provide information which can be assessed against the competency framework.
2. Ask five or more questions with individual review criteria
LaunchPad always recommends asking five or more questions in a video interview, as this helps measure a wide range of characteristics for each job. Having five questions, each with its own review criteria scores, also makes it easier to differentiate between good-fit and poor-fit candidates, which is crucial to building effective and valid predictive models.
Make sure your questions are clearly related to the role, so candidates are motivated to respond appropriately. Well-designed questions and review criteria help avoid skewed datasets that make it difficult to build valid predictive models.
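To illustrate why per-question review scores matter for prediction, here is a minimal sketch in Python. The data, field names and the simple group-average comparison are hypothetical for illustration only; they are not LaunchPad's actual schema or predictive model.

```python
# Hypothetical example: five review-criteria scores per candidate (1-5 scale)
# paired with a later job-performance outcome (1 = high performer).
candidates = [
    # ([Q1..Q5 scores], high_performer)
    ([5, 4, 4, 5, 3], 1),
    ([4, 5, 5, 4, 4], 1),
    ([2, 3, 2, 2, 3], 0),
    ([3, 2, 3, 2, 2], 0),
    ([4, 4, 3, 5, 4], 1),
    ([2, 2, 3, 3, 2], 0),
]

def mean(xs):
    return sum(xs) / len(xs)

# Average interview score per outcome group: a wide gap between groups
# suggests the questions and review criteria are differentiating good-fit
# from poor-fit candidates, which is what a predictive model needs.
high = [mean(scores) for scores, outcome in candidates if outcome == 1]
low = [mean(scores) for scores, outcome in candidates if outcome == 0]

gap = mean(high) - mean(low)
print(f"high performers: {mean(high):.2f}, others: {mean(low):.2f}, gap: {gap:.2f}")
```

A real deployment would fit a validated statistical model to many more candidates, but the underlying principle is the same: each question's review score is a feature, and the model learns which response patterns are associated with later job performance.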
3. Use a five-point rating scale
LaunchPad advises using a five-point rating scale with clear, specific review criteria. Standardised rating scales anchored to five points result in greater consistency among reviewers when scoring candidates. The review criteria should describe the expected behavioural responses of both good and poor candidates.
A five-point scale also enables reviewers to differentiate more readily between strong and weak answers. Research shows that using a rating scale with fewer than five points results in greater error and poorer differentiation between strong and weak applicants.
4. Quality assurance during human review
The judgements of human recruiters can be affected by a number of different biases and potential distortions. Unconscious bias, primacy or recency effects, stereotyping, overconfidence, even emotional mood and hunger can create variability in a reviewer’s assessment of a candidate.
Our research has shown that when human reviewers were asked to rate the same group of video interview candidates, the reviewers disagreed by two points or more on a five-point rating scale for 50% of the candidates.
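The disagreement statistic above can be computed in a few lines. This sketch uses made-up ratings, not LaunchPad's research sample: for each candidate, several reviewers give a 1-5 rating, and we count the share of candidates whose ratings span two or more points.

```python
# Hypothetical reviewer ratings (1-5 scale), one list per candidate.
ratings_per_candidate = [
    [3, 5, 2],  # spread = 3 -> disagreement
    [4, 4, 5],  # spread = 1 -> agreement
    [2, 4, 4],  # spread = 2 -> disagreement
    [3, 3, 4],  # spread = 1 -> agreement
]

def spread(ratings):
    """Largest gap between any two reviewers' ratings for one candidate."""
    return max(ratings) - min(ratings)

disagreed = sum(1 for r in ratings_per_candidate if spread(r) >= 2)
rate = disagreed / len(ratings_per_candidate)
print(f"{rate:.0%} of candidates had reviewers disagree by two or more points")
```

Tracking this rate over time is one simple way to check whether calibration exercises are actually improving reviewer consistency.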
LaunchPad recommends that companies take a rigorous approach to eliminating bias. At the beginning of a recruitment process, LaunchPad's Calibrate module requires reviewers to rate the same set of candidate responses, followed by a discussion exploring any differences in judgement, which supports consistency and benchmarking.
Building predictive analytics into the video recruitment process is already saving our clients time, reducing hiring costs, and enabling them to make more successful hires. If you'd like to learn more about building a robust data set for predictive analytics, or to find out about LaunchPad PREDICT, please get in touch.