Pre-employment tests offer a wealth of benefits, ranging from immediate gains (like a more efficient hiring process) to longer-term effects (like higher productivity and lower turnover). But how and when you incorporate tests into your hiring process affects what you gain from administering them. Testing is a uniquely flexible hiring tool: it can be added just about anywhere in the process, depending on where an employer finds it most beneficial. The best placement depends on a variety of factors, such as the applicant-to-hire ratio or the hiring timeline.
That said, we generally suggest administering tests as one of the first steps of the hiring process, for several reasons. For one, testing early helps you filter and sort candidates quickly using an objective metric. Reviewing aptitude assessment scores is not only faster than scanning resumes, it's also far more predictive of future success in the role, and it helps you avoid spending too much time on a candidate who wouldn't be a good fit. In addition, by gathering more data on candidates early on, employers open themselves up to applicants whom they might not otherwise have given a chance but who could be a great fit based on their potential. This is particularly helpful when you're evaluating applicants without much experience, like recent graduates.
Testing as a first step can also help reduce unconscious bias. As a hiring manager, you can receive objective, standardized, and predictive data on a candidate through test scores before you glance at a resume or even know their name. Using an impartial metric as one of several hiring criteria not only helps identify the best person for the job, but also helps you do so in a fair and objective way.
But wait: a single online job post can receive hundreds of applicants across multiple platforms. Isn't testing every single candidate expensive? The answer depends on how a vendor structures its pricing.
Many testing companies price on a pay-per-test model: the customer pays for each test administered. This model incentivizes pushing testing later in the process, because employers save money by testing only the candidates they're already interested in. The problem is that by the time you've narrowed the pool down to a top few, you've already spent a lot of time and resources sorting through applicants. What if you narrow the field to two finalists, administer the test, and they both bomb it? Your top contenders may not be up to the task, and it would have been more efficient to test them earlier. By delaying testing until the end of the process, you may also have missed great candidates by relying on less predictive hiring metrics (like resumes and interviews) to filter applicants early on.
A pricing structure more conducive to early testing is an unlimited testing model, where a flat subscription fee grants unlimited access to one or more tests for a fixed period. This lets customers get the most for their money: they can test as many candidates as they like, as many times as they like, whenever they like, without paying for each individual assessment.
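To make the trade-off concrete, here is a minimal sketch in Python comparing the two models. The prices used (a $30 per-test fee and a $4,000 annual subscription) are hypothetical figures chosen for illustration, not any vendor's actual rates.

# A minimal sketch of the cost trade-off under assumed prices.
# PRICE_PER_TEST and ANNUAL_SUBSCRIPTION are hypothetical figures,
# not any vendor's actual rates.

PRICE_PER_TEST = 30          # assumed fee per administered test (USD)
ANNUAL_SUBSCRIPTION = 4000   # assumed flat yearly fee for unlimited testing (USD)

def pay_per_test_cost(tests: int) -> int:
    """Pay-per-test: cost grows linearly with every candidate assessed."""
    return PRICE_PER_TEST * tests

def unlimited_cost(tests: int) -> int:
    """Unlimited subscription: cost stays flat regardless of volume."""
    return ANNUAL_SUBSCRIPTION

# Break-even volume: beyond this many tests per year, the flat fee is cheaper.
print(f"break-even: {ANNUAL_SUBSCRIPTION / PRICE_PER_TEST:.0f} tests/year")

for n in (50, 200, 1000):
    print(f"{n:>5} tests/year: pay-per-test ${pay_per_test_cost(n):,} "
          f"vs. unlimited ${unlimited_cost(n):,}")

Under these assumed prices, the flat fee comes out ahead once you test more than roughly 133 candidates a year. That is exactly why pay-per-test pricing nudges employers toward testing only a short list late in the process, while a subscription makes testing every applicant up front essentially free at the margin.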
Over the years, we've seen that testing up front often has the biggest impact for our customers, but where to place pre-employment testing in your hiring process is ultimately up to you and your specific goals. Tests are there to provide a highly predictive, objective metric for making more informed hiring decisions, and they will add value no matter where in the process you use them.