As a pre-employment testing provider, we offer both general aptitude and personality tests and micro-skills tests such as typing tests and computer skills assessments. We’ve written before about some of the differences between these categories, but many people still hold misconceptions about how profoundly general and specific tests differ, both in the science behind them and in the kinds of results companies should expect from them.
Cognitive ability, as is well known, is consistently one of the best predictors of job performance across just about every type of position. For many companies, using general aptitude tests to help inform their selection decisions is, pardon the pun, a no-brainer. Well-developed general personality inventories that measure stable behavioral traits can also be very effective in predicting performance and improving business metrics such as quality of hire and turnover. Accordingly, both aptitude and personality tests are validated against long-term outcomes such as productivity and tenure.
“Micro-skills” tests, on the other hand, are designed for a much narrower purpose: to measure proficiency with a particular acquired skill, such as typing or using Microsoft Excel. These tests serve a valuable function: they help an employer verify a claim on a resume. A candidate says they know C++. Do they really know C++?
But these specific tests are generally poor predictors of long-term success, because they were never designed to predict it. Even for a position that requires a lot of typing, a typing test is unlikely to capture the main factors driving job performance in that role.
There are exceptions. For a court stenographer or a transcriptionist, rapid data entry may be so central to the role that typing speed and accuracy are good predictors of success. But for the average administrative assistant who does data entry as part of the job, general qualities such as problem-solving, attention to detail, critical thinking, and conscientiousness will likely have far more impact on overall performance than typing proficiency will.
In short, specific micro-skills tests have their place. But they are not very effective at delivering on the central promise of pre-employment testing: that businesses can drive long-term improvements by incorporating evidence-based hiring tools.
The example of a software engineer is instructive. By testing a prospective engineer’s knowledge of a specific programming language, an employer can gauge what that candidate knows on the date of hire. But programming languages evolve and become obsolete quickly, so hiring talented engineers who learn fast and solve problems well is often a more effective long-term strategy. These are exactly the qualities that general tests are designed to measure.
If the predictive power of micro-skills tests is limited compared to that of aptitude and personality tests, why do so many companies focus solely on testing for micro-skills? As we noted, there are situations in which assessing micro-skills is vital. More broadly, though, an over-reliance on them often reflects short-term thinking: hiring managers don’t want to be embarrassed when new hires show up unable to perform one of the micro-skills listed in the job description.
This concern is understandable, especially for temporary staffing firms that may be placing employees in roles for weeks, not years. But if employers are serious about using tests to improve hiring results and drive long-term performance, then micro-skills tests should be a side dish, not the main course.