When Hiring, General Abilities Predict Success Better Than Specific Skills

How can you tell if your job applicants have what it takes to succeed in a particular position? There are so many factors that go into a hiring decision, and resumes can only tell you so much. Resumes are notoriously unreliable, with research suggesting that up to 78% of resumes contain misleading statements and 46% contain outright lies. Similarly, your candidates’ work experience and educational background are no guarantee that they possess critical thinking skills or problem-solving ability, and these factors have been shown to be poor predictors of future job performance. Sometimes the best way to dig deeper into what your candidates can actually do is to test their abilities.

When it comes to pre-employment tests, how do you decide which tests to choose or, more importantly, what abilities to test for in the first place? There are a lot of different types of tests, but most tests fall into one of two basic categories: general or specific.

General tests include cognitive aptitude tests and a lot of personality tests. At their core, general tests assess broad or innate abilities or characteristics that provide insight into a candidate’s potential for success. Specific tests, by contrast, are, well, specific. They test skills that a candidate has picked up through education or work experience, such as typing speed or familiarity with Microsoft Excel.

In essence, the main difference between general and specific tests is that general tests measure potential, while specific tests measure acquired skills that candidates have already learned. It’s the classic dichotomy between aptitude and achievement.

As it turns out, general tests (cognitive aptitude tests in particular) are much better at predicting overall job success than specific skills tests. This is because general tests measure core abilities such as critical thinking, learning ability, and problem solving skills, all of which affect how well an employee can adapt and thrive in a new position. One meta-analysis – a statistical summary of numerous studies in the field – even found that cognitive aptitude tests were three times as predictive as job experience and over four times as predictive as education level.

General personality tests also have a lot of predictive value, particularly when they measure conscientiousness. Conscientiousness is a trait that is consistently correlated with job success because it indicates how goal-oriented, self-disciplined, and dependable an individual will be.

In contrast, specific tests tend to be less predictive of long-term success. While a specific test of “microskills” – such as a test on a particular programming language or a test assessing data entry skills – can tell you whether your candidate already knows how to perform a certain task, research shows that such tests do not tend to be great predictors of overall performance over the long term. A general skills test that measures broader job-readiness competencies is an exception to this rule, but microskills tests typically assess only one limited part of the role. They do not assess a person’s ability to learn new skills, or to adapt and grow as an organization or job evolves. A general aptitude or personality test sheds light on a candidate’s long-term potential.

And because general tests measure broad abilities that are critical to success in many positions, they are predictive for a wide range of job roles. Both general and specific tests do have value when it comes to finding the right candidate, but using a more general test as your primary assessment, possibly in combination with a secondary skills test, is the best strategy for uncovering the candidates who are most likely to succeed.

Why is Measuring Quality of Hire So Difficult?

Amidst all the buzz over the advent of “big data,” HR departments are increasingly focused on using data to improve their talent acquisition strategies.  In our particular business—developing pre-employment assessments used by businesses to help inform their hiring decisions—we are seeing an increasing willingness on the part of employers to adopt evidence-based hiring tools.  The goal of all this is simple: better hiring results, or in other words, improvements in quality of hire (QoH).

There is widespread consensus about this: in a recent LinkedIn survey on recruiting trends in 2016, talent leaders cited quality of hire as the most important metric for tracking success in the recruiting process. Another finding, while not surprising, highlighted a central challenge that hiring managers face: only a third of the respondents felt that their methodologies for measuring quality of hire were strong.

It’s difficult to uncover what parts of the recruitment process are working without a metric for measuring job success once a person is hired. While tracking some QoH-related measures—such as retention—is relatively straightforward, getting to a unified performance metric that summarizes whether someone is a good hire or not can be very difficult.

We encounter this problem often when doing local validity studies, which are essentially a way to analyze how successful a pre-employment test is at predicting success for a particular role in a specific organization. The typical process for doing these studies is to administer the tests to a group of employees—customer service reps, for example—and then to compare the test results to the employees’ performance metrics. By tying your employees’ pre-hire test scores to their eventual work performance, you gain insight into how predictive and effective your employee selection criteria are. This can give credence to your current tactics or help you identify ways to improve your recruitment process.
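At its core, a local validity study is a correlation: pre-hire test scores on one axis, later performance metrics on the other. As a minimal sketch (all scores and ratings below are invented for illustration, not real client data), the central computation is just a Pearson correlation coefficient:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: pre-hire aptitude scores vs. later supervisor ratings
test_scores = [22, 30, 18, 35, 27, 40, 25, 33]
perf_ratings = [3.1, 3.8, 2.6, 4.2, 3.5, 4.6, 3.0, 4.0]

r = pearson_r(test_scores, perf_ratings)
print(f"validity coefficient r = {r:.2f}")
```

In real validity studies the coefficient is far more modest than in this toy data; the practical question is whether it is meaningfully above zero for a reasonable sample size, and that question can only be answered if the performance metric itself is trusted.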

One problem that often arises with local validity studies is when companies don’t have meaningful performance metrics in place. Alternatively, they may be able to provide performance metrics, but have little confidence that the metrics reflect who top performers are, or can’t agree internally as to the appropriateness and accuracy of those metrics. This presents a huge problem: how can you predict what you don’t measure, or don’t measure accurately?  The CEO of Hogan Assessments, a competitor of ours, expressed the problem well when he wrote that using data-driven hiring techniques without tracking quality of hire is “the equivalent of investing a great deal of money in weather forecasts without subsequently paying attention to the actual weather.”

So whether quality of hire metrics come from supervisor performance ratings, tangible business metrics (such as sales volume or customer satisfaction ratings), retention rates, or some combination thereof, it is important to invest time in coming up with performance metrics that measure something meaningful and that all stakeholders agree represent something real. Absent this, there is no point in spending time trying to predict who will be a good hire if you can’t agree on a definition of success once the hire is made.

Can Aptitude Tests Be Used to Predict Bad Behavior?

We’ve previously written about the use of the Wonderlic aptitude test on NFL draft prospects, pointing out that the popular press and NFL fans as a whole have often unfairly dismissed aptitude tests as irrelevant to future gridiron success. This seems to be based on jock stereotypes about the sport and on a misunderstanding of how tests, and predictive tools in general, work.  Virtually every article about the Wonderlic test at the NFL draft mentions Dan Marino, who bombed the Wonderlic and went on to a Hall of Fame career, as evidence that the tests aren’t predictive of success in football. However, this type of anecdotal evidence clearly holds no weight when statistically determining whether or not a test works.

We’ve argued, for example, that there may be more of a correlation between Wonderlic scores of NFL quarterbacks and their future performance than is supposed. Nevertheless, it is fair to say that the evidence for the predictive power of the Wonderlic in the NFL is mixed. This is not surprising, because while the modern NFL game is quite complex and requires quick decision-making skills—especially from quarterbacks—it is clear that so many of the determinants of success in the NFL have to do with athleticism, work ethic, and other things aptitude tests can’t measure.

Recently, CBS Sports published a story about a new analysis of the links between Wonderlic scores and the subsequent fates of the NFLers who took it (and yes, it does contain the obligatory mention of Dan Marino). This one had a very different focus, however, because instead of examining on-field performance, the study looked at the relationship between Wonderlic scores and the arrest records of NFL players. The results of the study, which appeared in the American Journal of Applied Psychology, were striking: players with below-average Wonderlic scores were twice as likely to be subsequently arrested as those who scored above the mean.

This is the first time we’ve seen a study that links low Wonderlic test scores to what the study calls “off-duty deviance,” or ODD, which may be our new favorite psychological term (“you down with ODD? yeah you know me.”)  Employers trying to prevent discipline-related problems in the workplace often use integrity/honesty tests or behavioral risk assessments that measure rule adherence or personality traits like conscientiousness that are linked to good behavior. Such tests have been shown to help prevent a wide variety of counterproductive work behaviors such as safety violations, absenteeism, illicit drug use, theft and fraud.  Aptitude tests, however, are more commonly used to predict overall performance, not who will constitute a behavioral risk.

But the new Wonderlic study is actually not the only sign of a possible link between intelligence and honesty.  The Washington Post recently reported on an Israeli study that seemed to link intelligence with honesty and truth-telling behavior. The study asked participants to enter a booth, roll a six-sided die, and report the number that came up to receive that amount of money instantly (if you roll a 4, you get $4, etc.). What they found was that those who scored lower on an intelligence test were far more likely to lie about rolling a six.
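The logic of that die-rolling design is easy to see with a quick simulation. A fair die averages 3.5, so if everyone reports honestly, the average payout should sit near $3.50; a group average well above that is statistical evidence of lying. A minimal sketch (the lying rate is an invented parameter for illustration, not a figure from the study):

```python
import random

def average_reported_payout(n_participants, lie_probability, seed=0):
    """Simulate the die-in-a-booth design: each participant rolls a fair
    die and is paid whatever number they report. Liars always report a 6."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_participants):
        roll = rng.randint(1, 6)
        report = 6 if rng.random() < lie_probability else roll
        total += report
    return total / n_participants

# Honest reporting averages ~$3.50; an inflated group average signals lying.
print(average_reported_payout(100_000, lie_probability=0.0))  # ~3.5
print(average_reported_payout(100_000, lie_probability=0.3))  # noticeably higher
```

The elegance of the design is that no individual can be proven to have lied, yet the group-level deviation from 3.5 quantifies dishonesty precisely.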

The implications of this study remain to be seen, so the results should be taken with a grain of salt. However, there seems to be growing evidence of a link between cognitive aptitude (intelligence) and other qualities that are typically thought to be purely behavioral or personality-driven. We expect to see a lot of future psychological research take on questions such as these, and we’re excited to see where the data lands!

Why Math Skills Are So Important in the Workplace

You’re forgiven if you didn’t know it was Math Awareness Month, but there are a lot of reasons why everyone should be more aware of the important role math plays in the workplace and in our everyday lives. With more and more evidence that Americans are falling behind in math ability compared to other developed nations, math ability is, in the United States at least, a gravely undervalued commodity.

You may think back to all the trigonometry you learned in school and point out that most jobs will never require you to find the cosine of an angle. But math skills are about much more than all the minutiae you were taught in school. Math skills – particularly numeracy and numerical problem solving – are not only fundamentally important to everyday job functions but also a strong indicator of broader cognitive abilities. And because cognitive aptitude is one of the most predictive factors of job success, testing your candidates’ math abilities is a great way to assess their ability to succeed on the job.

Math and numerical problem solving are a part of most cognitive ability tests. This is partly because math problems aren’t simply measuring math skills; they’re also measuring critical thinking, problem solving, and logic. So even though you may be hiring for a position that doesn’t “require” math skills, measuring your candidates’ basic numeracy skills often has implications for their ability to solve problems in the workplace.

You might also think that testing math ability is unnecessary in the modern age because we have access to computers and calculators that can perform more complicated math functions for us. While we do have nearly constant access to computers, they can’t do all the work for us if we don’t fundamentally understand the math we need them to perform.

If anything, math abilities are more important than ever with the rise of big data. Companies are relying more and more on data to guide their decisions, and employees who can analyze and interpret data in ways that inspire actionable decisions are extremely valuable. Even employees who may not work directly with data are at a disadvantage if they can’t understand what the data is conveying on a basic level.

Mathematical prowess is an extremely critical, chronically overlooked ability. Math skills are associated with broader cognitive abilities, and they are reflective of a candidate’s critical thinking and problem solving ability. Yes, a lot of the math we learned in school doesn’t end up being all that relevant for the majority of us, but basic numeracy is unavoidable in everyday life, and those who do avoid it are at a fundamental disadvantage. And for employers seeking critical thinkers and problem solvers, aptitude tests that measure math skills are a great way to gain insight into your candidates’ abilities.

Introducing the UCAT, an Internationally Friendly Aptitude Test

Today we’re excited to launch our new internationally friendly aptitude test, the Universal Cognitive Aptitude Test, or UCAT. The UCAT measures general cognitive aptitude, one of the most predictive factors for job success.

Just like the CCAT, our most popular aptitude test, the UCAT measures critical thinking, problem-solving ability, and logic, all elements of cognitive aptitude. Because the UCAT and the CCAT are measuring the same abilities, they are highly correlated with each other.

What makes the UCAT different is that it deemphasizes verbal ability, making the test ideal for use with non-native English speakers and international candidates. The UCAT places more of an emphasis on problem solving, attention to detail, and data interpretation, which makes it a particularly strong assessment for quantitative and analytical positions.

The UCAT is a 20-minute test with 40 questions. The test is written in English but can easily be translated into other languages or used as a test for non-native English speakers. Moving forward, we plan to make the UCAT available in other languages – let us know what languages you’d be most interested in using!

 

3 Mistakes to Avoid When Using Pre-Employment Tests

Pre-employment tests provide incredibly useful information that allows you to make more informed hiring decisions. By incorporating professionally developed pre-hire assessments into the hiring process, you gain relevant, objective data that, when combined with other factors such as interviews and work experience, can present a more comprehensive view of your candidate’s capabilities.

However, it pays to be mindful about how to use pre-hire tests in a way that provides the most value to your organization. Here are three of the biggest mistakes you could be making with pre-employment testing:

  1. Choosing the wrong tests. This is by far the most important pitfall to avoid when using pre-employment tests. Test selection is vital because no matter how well-validated a test may be, it has little value if it isn’t measuring job-related capabilities. Test validity is often misunderstood—it does not exist in a vacuum, and even a well-validated test can be problematic if it’s being used for a purpose for which it was not validated.

    For example, you wouldn’t give a typing test to candidates applying to be maintenance workers if they won’t be expected to use a computer on the job. Their scores on such a test would prove meaningless for making a hiring decision.

    Even more importantly, as the EEOC has made clear in its Uniform Guidelines on Employee Selection Procedures (UGESP), the crucial standard in assessing compliance with respect to any criterion used in making hiring decisions—including tests—is that it must be job-related. Taking the time to evaluate the skills and abilities required for a particular position will enable you to select the tests that will provide the most valuable information while remaining legally compliant.

  2. Having unrealistic expectations. The right employment tests will help employers predict work performance. However, while pre-employment tests are predictive of success, they are not a crystal ball.

    When a pre-employment test has predictive validity, this means that, on average and across a large sample of data, the test correctly predicts business outcomes. It does not mean that the test correctly predicts performance in every single individual case. Outliers can and do happen, but on average there should be a significant correlation between test results and work performance. Like weather forecasts, pre-employment test predictions don’t always come true, and it is unrealistic to expect a test to make the right prediction every time.

    Fortunately, pre-employment tests, and aptitude tests in particular, are some of the most predictive hiring criteria you can use. In fact, one study conducted by the National Bureau of Economic Research found that pre-employment tests are consistently better at predicting job success than are hiring managers.

  3. Ignoring the candidate experience. Let’s face it, not everyone loves taking tests. Pre-employment tests are designed to help employers find the best talent, but making the testing experience too burdensome can have the unintended consequence of turning off some candidates. Being cognizant of the candidate experience not only improves your employer brand but also minimizes the amount of drop-off you may experience from candidates who don’t feel invested enough to commit to a lengthy testing process.

    One of the most important elements of candidate experience when it comes to testing is the amount of time it takes a candidate to complete the tests. We generally recommend administering tests at the beginning of the hiring process to get the most value out of the test results. When testing candidates early in the process, a pre-employment test may be one of the first touchpoints a candidate has with the company, which means that if the company immediately assigns a three-hour battery of tests, the candidate may quickly lose interest. We’ve done our own research on the subject and found that candidate drop-off is minimal so long as the total testing time remains below 45 minutes.

    There are a lot of other ways to be mindful of the candidate experience, and a lot can be accomplished through thoughtful messaging. For instance, explaining to candidates why they’re being tested and what the tests measure can be helpful. Similarly, sending an email confirming that the candidate’s test results have been received goes a long way towards making your candidates feel that their time is respected.

The Cost of a Bad Hire and Reducing the Odds of Making One

It’s well known that hiring a bad employee can be incredibly costly. Estimates of the true cost of a bad hire vary widely depending on the type of position and amount of experience required. One estimate from the US Department of Labor places the average cost of a bad hiring decision at about 30% of the employee’s first-year salary. In another study conducted by CareerBuilder, 69% of companies surveyed were negatively impacted by a bad hire, and nearly a quarter of those employers stated that a bad hire cost them over $50,000.

Bad hires are costly in a lot of different ways, some of them less tangible than others. While the costs associated with hiring and training a new employee are obvious, bad hires can also have a negative impact on employee morale and overall productivity. As a (relatively) small business ourselves, with a customer base made up of many small and medium-sized businesses, it’s our position that the risks of a bad hire can be more dramatic for smaller companies. Smaller companies have significantly less bandwidth to put towards covering the duties of the vacant position and recruiting a replacement, and an unproductive or, even worse, toxic hire can have a bigger impact on a small team than on a larger organization.

Because of these costs, companies often strive to reduce the risk of hiring the wrong person as much as possible. There are a lot of reasons why a company might hire the wrong person, but 21% of the employers in a CareerBuilder survey attributed their bad hires to a failure to sufficiently assess employee skills in the pre-hire process. Employees who lack the necessary skills or abilities for the job will underperform, ultimately leading to involuntary turnover.

Testing your candidates in the pre-hire process is one of the best ways to minimize the risks posed by a potential bad hire. Resumes and interviews can only reveal so much information – one survey found that 56% of hiring managers have caught job candidates lying on their resumes, most of whom were embellishing their stated abilities. Administering pre-employment tests for vital job-related abilities is one of the few objective ways to accurately assess your candidate’s potential to fulfill the responsibilities of the job.

Despite the claims of some testing vendors, pre-employment tests can’t magically erase the chance of making another bad hire. We cringe, and potential customers should too, when we see testing providers make claims that their tests will prevent you from ever making a bad hire again. There are many things that tests cannot measure, and the best pre-employment tests arm hiring managers with predictive data that helps them make informed hiring decisions. Incorporating professionally developed pre-employment tests into your employee selection process is about reducing your hiring risk, not eliminating it.

Why Cognitive Aptitude is Such a Great Predictor of Job Performance

Cognitive aptitude tests are some of the best tools for predicting job performance. In fact, one of the best known reviews of research in the field of employee selection demonstrated that cognitive aptitude tests are far more predictive than some of the most common hiring criteria – they are twice as predictive as job interviews, three times as predictive as work experience, and four times as predictive as education level.*

Cognitive Aptitude

What is it about cognitive aptitude that makes it so good at predicting job performance? Cognitive aptitude is the ability to think critically, solve problems, learn new skills, and digest and apply new information; essentially these tests measure many of the qualities that employers look for in almost every job description they create. Because cognitive aptitude is associated with decision making ability and situational judgment, pre-employment aptitude tests often have even greater efficacy as a predictive tool the higher you move up the job ladder. The abilities that aptitude tests assess are well-suited for hiring employees who are, for instance, tasked with making independent decisions, coming up with big picture ideas, or managing others.  While the abilities measured by aptitude tests are drivers of performance for almost any job, they tend to be less predictive for roles that involve a lot of repetition and routine than they are for jobs that require problem-solving and frequent decision-making.

While cognitive aptitude tests measure general intelligence, they are not the same as pure IQ tests. Cognitive aptitude tests measure many of the same things that IQ tests measure, but they also measure other abilities that are more specifically relevant to job performance. For example, cognitive aptitude tests often measure attention to detail, an ability that is applicable to nearly every type of job but is less commonly associated with “pure intelligence.” These are the types of abilities that drive job performance because they’re so relevant to the day-to-day tasks of many employees. Ultimately, by blending practical abilities with general aptitude, pre-employment cognitive aptitude tests are highly successful at identifying the candidates who are most likely to succeed in their positions.

*Schmidt, F. & Hunter, J. (1998). The validity and utility of selection methods in personnel psychology: Practical and Theoretical Implications of 85 years of research findings. Psychological Bulletin, 124(2), 262-274.

 

Announcing Our New Blind Hiring Feature

The conversation around blind hiring is heating up in the HR world as more and more people are becoming aware of the effects that unconscious bias can have on the hiring process. In our last blog post on the topic, we discussed how blind hiring practices could serve as a valuable tool for combating unconscious bias.

More recently, the New York Times published an in-depth look at the potential value of blind hiring in light of the strong evidence that unconscious bias is negatively impacting the diversity of hires in a number of major industries. The article argues that employers often choose employees based on cultural fit, and that this reliance on human judgment when making hiring decisions is unintentionally impacting certain groups more than others. Even more importantly, powerful new research shows that this dependence on gut feeling may actually lead to worse hiring outcomes overall when compared to the predictions of less biased algorithms and pre-employment tests.

What this demonstrates is that people aren’t as good at making judgments as we’d like to think. The New York Times article demonstrates that statistically, there is a lot of untapped and undervalued talent in the candidate pool, and using additional tools can help hiring managers find the best talent. Tech companies, in particular, have begun to take a serious look at ways to eliminate unconscious bias in the hiring process in order to promote diversity and ensure they hire the best talent possible. Companies like GapJumpers and Blendoor have already begun to develop blind hiring solutions that tackle the diversity issue head-on.

We’ve been thinking about blind hiring practices and reducing unconscious bias for a while now, and so today we are releasing a new blind hiring feature within our pre-employment testing software, HireSelect. When administering our pre-employment tests to job candidates, HireSelect users now have the option to turn on the blind hiring feature, which will hide the names and email addresses of job applicants as the user reviews test results. Names and email addresses can often betray an individual’s gender or ethnicity, and hiding them allows you to examine the test scores in a less biased environment. This blind hiring tool can be turned on and off at any time, so employers can turn it on at the first stage of the process and then turn it back off once they decide who they want to reach out to for interviews.

This is one small step towards promoting a more impartial hiring process, and we plan on expanding the feature as blind hiring tactics evolve over time. We hope that employers find value in this tool and that it helps them find and hire the best talent available.

4 Reasons You Should Never Use the Myers-Briggs Test for Hiring

The Myers-Briggs Type Indicator (MBTI), one of the most well-known personality tests in America, has come under fire in the media recently because a significant body of evidence indicates that the test’s results are largely meaningless. This is a classic case of the popular press (belatedly) catching on to something that has been a virtual consensus among academic psychologists for a long time. And yet the MBTI continues to be widely used by companies and college career centers across the globe.

The test’s enduring popularity isn’t surprising. The MBTI sorts each test-taker into one of sixteen tidy personality types, each made up of overwhelmingly positive personality traits. These results can be a jumping off point for individuals to think about their communication styles and to explore different ways of viewing the world. Many organizations continue to use the test for team-building or improving collaboration between employees.

And unfortunately, some employers still use it for hiring, which we can say unequivocally is a mistake. The Myers-Briggs should NEVER be used as a pre-employment test or to help inform the hiring process. Here are four reasons why:

  1. It’s based on outdated science. The MBTI, originally developed over 70 years ago, is based on Carl Jung’s typological theory of personality. It divides elements of human personality into binary categories, sorting test-takers into one of two buckets across each of four traits. The study of personality has come a long way since then. Modern psychological research shows that human personality cannot be accurately divided into discrete types, and tests that use this model tend to lack both reliability and validity. More recent research supports a “trait over type” approach, viewing personality traits like introversion/extraversion as dimensions or continuums rather than as binary absolutes. The most prominent personality test framework uses the “Big Five” personality traits—five dimensions that consistently emerge in empirical research: Agreeableness, Conscientiousness, Extraversion, Openness (to Experience), and Emotional Stability. The concept of personality “traits” measured on a continuum is now widely accepted and has superseded the older personality “types” model that originated with Jung.
  2. The test is not reliable. Because the MBTI classifies people into types or buckets (described above in #1), it has poor reliability. One study on the MBTI demonstrated that when a sample population took the MBTI and then took the test again 5 weeks later, about 50% of people received different results. Because the test sorts people into types, a person who doesn’t have a strong inclination for one type over the other may be just a few questions away from being placed into an entirely separate category. This demonstrates that the test has poor test-retest reliability.
  3. It isn’t predictive of job performance. This is probably the most important reason you should never use the Myers-Briggs test for making hiring decisions. Studies have consistently demonstrated that the test fails to predict job performance in any meaningful way.  If the main reason to use a pre-employment test is to predict job performance, then a test that lacks this predictive validity is essentially useless as an employee selection device.
  4. The MBTI’s publisher itself explicitly discourages its use as a pre-employment test. The guidelines put out by the Myers & Briggs Foundation very clearly state that “it is not ethical to use the MBTI instrument for hiring or for deciding job assignments.” This is because of reason #3, that the test is not predictive of job performance. The Equal Employment Opportunity Commission (EEOC) requires that every factor used to make hiring decisions be job-related and “properly validated for the positions and purposes for which they are used.” The MBTI lacks the predictive validity of many other professionally developed and validated employment personality tests, and therefore its use in the hiring process is not ethical.
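The test-retest problem described in #2 can be made concrete. For a type-based instrument, reliability is often summarized as the percentage of people assigned the same type on two administrations; the MBTI study cited above found roughly 50%. A toy calculation with invented four-letter types:

```python
def retest_agreement(first_results, second_results):
    """Percentage of test-takers assigned the same type on both administrations."""
    same = sum(1 for a, b in zip(first_results, second_results) if a == b)
    return 100.0 * same / len(first_results)

# Invented data: MBTI-style types from two sessions, five weeks apart
session_1 = ["INTJ", "ESFP", "ENTP", "ISFJ", "INFP", "ESTJ"]
session_2 = ["INTP", "ESFP", "ENTP", "ISTJ", "INFP", "ENTJ"]

print(f"{retest_agreement(session_1, session_2):.0f}% got the same type twice")
# prints: 50% got the same type twice
```

Note how little it takes to flip a type: each mismatched pair above differs by a single letter, which is exactly what happens when someone near the midpoint of a dimension answers a few questions differently.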

While the Myers-Briggs personality test should never be used as a hiring tool, there are plenty of validated, professionally developed personality tests that DO have predictive validity in the context of employment. When selecting a personality test for pre-employment testing, always look for tests that are backed by present-day psychological research and that have been validated to predict job performance for the types of positions you seek to hire. At a minimum, employment personality tests should have solid reliability and validity—which rules out the MBTI on both counts. Personality tests can be incredibly valuable tools for finding the best talent in your applicant pool, and using tests that produce meaningful, predictive results in the hiring process is the key to getting the most out of testing.