Why General Tests Deliver Better Hiring Results Than Specific Skills Tests

As a pre-employment testing provider, we offer general aptitude and personality tests as well as micro-skills tests such as typing tests and computer skills assessments. We’ve written before about some of the differences between general tests and more specific tests, but we’ve found that many people still have misconceptions about the profound differences between the two, both in terms of the science behind them and the types of results companies should expect from them.


Cognitive ability, as is well known, is consistently one of the best predictors of job performance across just about every type of position. For many companies, using general aptitude tests to help inform their selection decisions is, pardon the pun, a no-brainer. Well-developed general personality inventories that measure stable behavioral traits can also be very effective in predicting performance and improving business metrics such as quality of hire and turnover. Accordingly, both aptitude and personality tests are validated against long-term outcomes such as productivity and tenure.

“Micro-skills” tests, on the other hand, are designed for a much narrower purpose: to measure proficiency with a particular acquired skill, such as typing or using Microsoft Excel. As such, these tests serve a valuable function: they help an employer verify an item on a resume. A candidate says they know C++. Do they really know C++?

But these specific tests are generally not very good at predicting long-term success, because they are not designed to do so. Even for a position that requires a lot of typing, typing proficiency is unlikely to be the main factor driving long-term job performance in that role.

There are certain exceptions to this. For a court stenographer or a transcriptionist, rapid data entry may be such a central part of the role that typing speed and accuracy will be a good predictor of success. But for the average administrative assistant who does data entry as part of their job, there are many general qualities, such as problem-solving, attention to detail, critical thinking, or conscientiousness, that will likely have much more of an impact on overall performance than typing proficiency will.

In short, specific micro-skills tests have their place. But with respect to delivering on the central promise of pre-employment testing—that businesses can drive long-term improvements by incorporating evidence-based hiring tools—micro-skills tests are not that effective.

The example of a software engineer is instructive. By testing a prospective engineer’s knowledge of a specific programming language, an employer can verify what that engineer knows on the date of hire. But programming languages evolve and become obsolete at a rapid pace, so hiring talented engineers who learn quickly and are great problem-solvers is often a more effective long-term approach. These are the qualities that general tests are designed to measure.

If the predictive power of micro-skills tests is limited compared to aptitude and personality tests, then why do so many companies focus solely on testing for micro-skills? As we noted, there are situations in which assessing micro-skills is vital. But in general, an over-reliance on micro-skills tests reflects a preference for short-term thinking. Hiring managers don’t want to be embarrassed when new hires show up and aren’t proficient in one of the micro-skills listed in the job description.

This attitude is absolutely understandable, especially for temporary staffing firms that may be placing employees in roles for weeks, not years. But if employers are serious about using tests to improve hiring results and drive long-term performance improvement, then micro-skills tests should be a side dish, not the main course.

When Hiring, How Important is Emotional Intelligence?

Emotional intelligence is a hot topic in HR lately and, at face value, it seems like an attribute that every great employee should have. But how do you define and measure emotional intelligence well enough to seek it out in your job candidates?

The answer is not so simple. Much of the ambiguity stems from competing definitions of what emotional intelligence is in the first place. There are two main models of emotional intelligence (EI), one based on abilities and another based on traits.

The ability model posits that people vary in their ability to process and think about emotions, and that this ability can be measured through adaptive behaviors. These behaviors include perceiving, using, understanding, and managing emotions, which this model measures through emotion-based problem solving tasks.

In contrast, the trait-based model measures EI through people’s self-perceived emotional abilities. EI tests that use this model require individuals to self-report their personality and behaviors based on prompts, similar to the way that many established personality tests assess individuals. There’s also a third “mixed” model, popularized by Daniel Goleman’s 1995 book Emotional Intelligence, which combines elements of the ability and trait models. While there are pros and cons to each model, there is no general consensus within the scientific community about which one is more accurate.

To complicate things further, the research linking emotional intelligence to job performance shows very mixed results. One meta-analysis of dozens of studies on EI and the workplace concluded that the results so far are inconsistent. Noted psychologist Adam Grant, himself a fan of the new emphasis on emotional intelligence research, recently argued that the evidence does not yet support the use of EI tests to inform hiring decisions. In comparison, tests of cognitive aptitude (or traditional intelligence) are consistently shown to be much more predictive of performance than emotional intelligence.

This is not to say that emotional intelligence isn’t valuable in the workplace. Much of what we perceive EI to be may actually overlap with other, more established measures. For instance, some evidence shows that EI may be linked to traits commonly measured in personality tests, including agreeableness and openness, although the strength of those relationships varies from study to study. What’s more, EI has been shown to be positively correlated with cognitive aptitude, suggesting that some components of EI may be encompassed within traditional intelligence.

But many questions still remain: How can we measure EI in a way that is predictive of job performance? What relationship does EI have to cognitive aptitude? What relationship does EI have to personality?

Here at Criteria, we think emotional intelligence is a really exciting frontier for research. While a lot of fascinating work is being done to uncover the link between emotional intelligence and workplace performance, the current research isn’t quite strong enough for us to recommend using it as a factor for making hiring decisions.

So while for now there might not be a well-validated EI test for hiring purposes, there are ways you can approximate emotional intelligence through other more predictive factors. In the meantime, we look forward to seeing what future research has to tell us about emotional intelligence and the workplace.

When Hiring, General Abilities Predict Success Better Than Specific Skills

How can you tell if your job applicants have what it takes to succeed in a particular position? There are so many factors that go into a hiring decision, and resumes can only tell you so much. Resumes are notoriously unreliable, with research suggesting that up to 78% of resumes contain misleading statements, while 46% contain actual lies. Similarly, your candidates’ work experience and educational background aren’t a guarantee that they possess critical thinking skills or problem solving ability, and these factors have been shown to be poor predictors of future job performance. Sometimes the best way to dig deeper into what your candidates can actually do is by testing their abilities.

When it comes to pre-employment tests, how do you decide which tests to choose or, more importantly, what abilities to test for in the first place? There are a lot of different types of tests, but most tests fall into one of two basic categories: general or specific.

General tests include cognitive aptitude tests and many personality tests. At their core, general tests assess broad or innate abilities or characteristics that provide insight into a candidate’s potential for success. Specific tests, by contrast, are, well, specific. They test particular skills that a candidate has picked up through education or work experience, such as typing speed or familiarity with Microsoft Excel.

In essence, the main difference between general and specific tests is that general tests measure potential, while specific tests measure acquired skills that candidates have already learned. It’s the classic dichotomy between aptitude and achievement.

As it turns out, general tests (cognitive aptitude tests in particular) are much better at predicting overall job success than specific skills tests. This is because general tests measure core abilities such as critical thinking, learning ability, and problem solving skills, all of which affect how well an employee is able to adapt and thrive in a new position. One meta-analysis, a statistical summary of numerous studies in this field, even found that cognitive aptitude tests were three times as predictive as job experience and over four times as predictive as education level.

General personality tests also have a lot of predictive value, particularly when they measure conscientiousness. Conscientiousness is a trait that is consistently correlated with job success because it indicates how goal-oriented, self-disciplined, and dependable an individual will be.

In contrast, specific tests tend to be less predictive of long-term success. While a test of micro-skills, such as a test on a particular programming language or a data entry assessment, can help you find out whether your candidate already knows how to perform a certain task, research shows that such tests do not tend to be great predictors of overall performance in the long term. A general skills test that measures broader job-readiness competencies is an exception to this rule, but micro-skills tests typically assess only one limited part of the role. They do not assess a person’s ability to learn new skills, or to adapt and grow as the organization or job evolves. A general aptitude or personality test sheds light on that long-term potential.

And because general tests measure broad abilities that are critical to success in many positions, they are predictive for a wide range of job roles. Both general and specific tests do have value when it comes to finding the right candidate, but using a more general test as your primary assessment, possibly in combination with a secondary skills test, is the best strategy for uncovering the candidates who are most likely to succeed.

Why Math Skills Are So Important in the Workplace

You’re forgiven if you didn’t know it was Math Awareness Month, but there are a lot of reasons why everyone should be more aware of the important role math plays in the workplace and in our everyday lives. With more and more evidence that Americans are falling behind other developed nations in math, mathematical ability is, in the United States at least, a gravely undervalued commodity.

You may think back to all the trigonometry you learned in school and point out that most jobs will never require you to find the cosine of an angle. But math skills are about much more than all the minutiae you were taught in school. Math skills, particularly numeracy and numerical problem solving, are not only fundamentally important to everyday job functions but are also a strong indicator of broader cognitive abilities. And because cognitive aptitude is one of the most predictive factors of job success, testing your candidates’ math abilities is a great way to assess their ability to succeed on the job.

Math and numerical problem solving are a part of most cognitive ability tests. This is partly because math problems aren’t simply measuring math skills; they’re also measuring critical thinking, problem solving, and logic. So even though you may be hiring for a position that doesn’t “require” math skills, measuring your candidates’ basic numeracy skills often has implications for their ability to solve problems in the workplace.

You might also think that testing math ability is unnecessary in the modern age because we have access to computers and calculators that can perform more complicated math functions for us. While we do have nearly constant access to computers, they can’t do all the work for us if we don’t fundamentally understand the math we need them to perform.

If anything, math abilities are more important than ever with the rise of big data. Companies are relying more and more on data to guide their decisions, and employees who can analyze and interpret data in ways that inspire actionable decisions are extremely valuable. Even employees who may not work directly with data are at a disadvantage if they can’t understand what the data is conveying on a basic level.

Mathematical prowess is an extremely critical, chronically overlooked ability. Math skills are associated with broader cognitive abilities, and they are reflective of a candidate’s critical thinking and problem solving ability. Yes, a lot of the math we learned in school doesn’t end up being all that relevant for the majority of us, but basic numeracy is unavoidable in everyday life, and those who do avoid it are at a fundamental disadvantage. And for employers seeking critical thinkers and problem solvers, aptitude tests that measure math skills are a great way to gain insight into your candidates’ abilities.

Why Cognitive Aptitude is Such a Great Predictor of Job Performance

Cognitive aptitude tests are some of the best tools for predicting job performance. In fact, one of the best-known reviews of research in the field of employee selection demonstrated that cognitive aptitude tests are far more predictive than some of the most common hiring criteria: they are twice as predictive as job interviews, three times as predictive as work experience, and four times as predictive as education level.*


What is it about cognitive aptitude that makes it so good at predicting job performance? Cognitive aptitude is the ability to think critically, solve problems, learn new skills, and digest and apply new information; essentially these tests measure many of the qualities that employers look for in almost every job description they create. Because cognitive aptitude is associated with decision making ability and situational judgment, pre-employment aptitude tests often have even greater efficacy as a predictive tool the higher you move up the job ladder. The abilities that aptitude tests assess are well-suited for hiring employees who are, for instance, tasked with making independent decisions, coming up with big picture ideas, or managing others.  While the abilities measured by aptitude tests are drivers of performance for almost any job, they tend to be less predictive for roles that involve a lot of repetition and routine than they are for jobs that require problem-solving and frequent decision-making.

While cognitive aptitude tests measure general intelligence, they are not the same as pure IQ tests. Cognitive aptitude tests measure many of the same things that IQ tests measure, but they also measure other abilities that are more specifically relevant to job performance. For example, cognitive aptitude tests often measure attention to detail, an ability that is nearly universally applicable to every type of job but is less commonly associated with “pure intelligence.” These are the types of abilities that drive job performance because they’re so relevant to the day-to-day tasks of many employees. Ultimately, by blending practical abilities with general aptitude, pre-employment cognitive aptitude tests are highly successful at identifying the candidates who are most likely to succeed in their positions.

* Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124(2), 262–274.


The Secret to Google’s Hiring Revealed: Cognitive Ability

Last summer we reacted to an interview with Laszlo Bock at Google, who seemed to say that test scores and grades were useless predictors for hiring decisions. We said that what constitutes useful information for hiring purposes at Google may well differ from what constitutes useful information for hiring elsewhere, and we pointed out that validating a selection tool after it has been used, and only for those who were selected, will typically yield lower estimates of the usefulness of that tool (an effect known as range restriction).

This week, in a widely read New York Times column, we get a fuller answer about Google’s hiring goals. What do they look for? Number one, says Mr. Bock, is cognitive ability. Although Bock is quick to distinguish this from IQ — he sees it as demonstrating an ability to learn quickly — the fluid intelligence he’s trying to evaluate likely correlates well with traditional clinical, academic, and business-oriented measures of cognitive ability. Bock is also looking for leadership and a sense of responsibility.

In short, Google is largely looking for the same things that organizational psychologists have been telling us for decades predict job performance — cognitive ability and personality. Measures of conscientiousness are often the second best predictor of job success (after cognitive ability). Other preferred aspects of personality will depend on the nature of the work and the workplace.

For any given selection process, those making decisions want predictive information. What constitutes predictive information will vary from setting to setting. For a company like Google, the composition of the applicant pool and the nature of the workplace might mean that certain traditional sources of information are less useful, and Google has the resources to invent a new, tailored interview process to gather new information. However, the underlying constructs they are looking at — cognitive ability and conscientiousness — are ones that pre-employment assessments have been highlighting for some time.

Every organization must also deal with its own costs: what are the consequences of hiring the wrong person? What are the consequences of failing to hire a qualified person? We mostly think about the cost of hiring the wrong person (false positives), but there is also a cost to missing a diamond (false negatives), as the short sketch below illustrates. Facebook paid $19 billion to buy what Brian Acton built (WhatsApp) four years after they didn’t hire him.

But even if they go about it differently, all companies are trying to maximize the information they have about the cognitive ability and character of the people they hire.
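As a purely hypothetical illustration of that cost trade-off, the minimal sketch below compares the expected cost of the two kinds of hiring error. Every figure in it is invented for the example; real costs and error rates vary widely by role and company.

```python
# Hypothetical illustration of weighing the two kinds of hiring error.
# Every number below is invented for the example.

cost_false_positive = 50_000     # cost of a bad hire: lost productivity, re-hiring, etc.
cost_false_negative = 1_000_000  # value lost by passing on an exceptional candidate
p_false_positive = 0.10          # chance that a given hire turns out to be a mistake
p_false_negative = 0.01          # chance of rejecting someone who would have excelled

expected_cost = (p_false_positive * cost_false_positive
                 + p_false_negative * cost_false_negative)

print(f"Expected cost per hiring decision: ${expected_cost:,.0f}")
# With these (made-up) numbers, the rare missed "diamond" contributes more
# to the expected cost than the more common bad hire does.
```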

Train Your Brain to Boost IQ?

Today’s blog post is by Eric Loken, Criteria’s Chief Research Scientist and a member of Criteria’s Scientific Advisory Board. Eric plays a leading role in the development of Criteria’s employment tests.

Last week there was an article in the New York Times that described a study finding that intelligence might not be the constant, innate quality that it is usually assumed to be. Researchers at Michigan showed that when a group of participants practiced a challenging cognitive task for two to three weeks, they scored better on a standardized measure of intelligence.

At first this sounds like the kind of obvious effect that commercial test preparation companies pass off as a marketable service. It’s well known that if you take a group of students and give them practice SATs over and over again, their scores will go up slightly, even if they haven’t paid $1,000 for the privilege of practicing.

But the Michigan study is different because it showed something called transfer. The participants in the study started by taking a matrices pattern test, which is intended as a culture-free intelligence test where success doesn’t depend on the kind of skills and knowledge developed in school. Then they trained on a difficult attention and working memory task called the n-back task (Criteria’s MRAB aptitude test contains a very similar task). The participants in the training group practiced for 20 minutes a day for up to 19 days to get better at this task, and they did. (The control group, meanwhile, was basically doing nothing, which is a bit of a flaw in the experiment, but we’ll let that go for now.)
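For readers unfamiliar with the task, here is a minimal sketch of what a single-stimulus n-back exercise looks like. It is a simplified illustration only, not the actual task used in the study or in MRAB.

```python
import random
import string

def generate_nback_stream(n=2, length=20, match_rate=0.3):
    """Build a letter stream in which some items repeat the letter shown n steps earlier."""
    stream = []
    for i in range(length):
        if i >= n and random.random() < match_rate:
            stream.append(stream[i - n])        # a planned "match" trial
        else:
            stream.append(random.choice(string.ascii_uppercase))
    return stream

def score_responses(stream, responses, n=2):
    """Score yes/no responses: responses[i] is True if the test-taker said that
    item i matched the item n positions earlier. Returns the proportion correct."""
    scored = [responses[i] == (stream[i] == stream[i - n])
              for i in range(n, len(stream))]
    return sum(scored) / len(scored) if scored else 0.0

# Example: a test-taker who answers "no match" on every trial.
letters = generate_nback_stream()
accuracy = score_responses(letters, [False] * len(letters))
print(f"Accuracy when always answering 'no': {accuracy:.0%}")
```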

The point of the study is that the matrices intelligence test is a different task from the one the group was training on, and yet the training transferred over to yield improved performance. This study caught our attention for a few reasons. First, the control group showed improvement in their matrices test scores (despite just sitting around). In general, people don’t perform at their best the first time they take a test, and they will improve the second time around simply because of practice or familiarity. This is something to keep in mind with employee testing: if for whatever reason you have to give a candidate a test a second time, even if you use a different form of the test, you shouldn’t be surprised to see a mild improvement over the first score (this is sometimes called the “practice effect”).

But the most important finding of the study is that the group that practiced the memory task improved their matrices scores by a wider margin. It’s interesting to think about what this says about the effects of the workplace on intelligence. Employers are obviously looking for intelligent employees who will have a positive impact on their organization, but they should also keep in mind that the workplace environment can, in turn, affect the intelligence of their employees. We’re not sure it would serve the interests of productivity to set aside 20 minutes a day for “cognitive training” (although similar proposals exist in the interest of maintaining employee health and thus reducing healthcare costs). But it is worth remembering that a challenging work environment will likely keep skills and minds sharp.

This study is the latest entry in the age-old debate over “brain plasticity” and the extent to which our mental ability is fixed. We’ll probably have more to say on this topic as we keep track of which way the pendulum is swinging.