Striking a Balance Between Quality and Speed in the Hiring Process

Most hiring tools are designed to accomplish two primary goals: identifying quality candidates more accurately, and making the hiring process faster and more efficient for both employers and job seekers.

It’s obvious why you’d want to prioritize finding quality candidates. A bad hire can cost, on average, about 30% of the employee’s first-year salary according to an estimate from the US Department of Labor. What’s more, quality candidates can have an often intangible positive impact on an organization by increasing productivity, inspiring new ideas, and boosting morale. Identifying the best candidates among a sea of resumes can be a challenge, but it’s unquestionably worth the effort.

However, it’s equally clear why you’d want the hiring process to move as smoothly and efficiently as possible. Applying for jobs on the web has become almost TOO easy; it’s no surprise that an average of 250 resumes are submitted for every online posting. In this environment, and with time to hire being an important metric for talent management professionals, there’s a clear need for tools that move candidates through the evaluation process as quickly as possible. Together, these two goals, quality of hire and time to hire, are hallmarks of a successful and rewarding hiring process. But as everyone who has done a lot of hiring knows, the two sometimes clash.

Pre-employment tests are one area where quality of hire and speed intersect. In general, gathering predictive data through tests early in the hiring funnel can streamline the hiring process by letting hiring managers direct their energies toward evaluating the candidates who are most likely to succeed. All other things being equal, it is also generally true that the longer a test is, the more accurate the information it gathers.

For example, a three-hour aptitude test will generally have greater predictive validity and reliability than a five-minute test. But very few applicants would sit through a three-hour test as one of the first steps in an application process, so not many employers would be willing to introduce that amount of friction into their hiring funnel. Especially when tests are used as an initial screen, less is often more.
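One classical way to quantify this length-versus-accuracy trade-off is the Spearman-Brown prophecy formula from psychometrics, which predicts how a test’s reliability changes when the test is lengthened or shortened. The sketch below is purely illustrative; the starting reliability and length factors are hypothetical, not drawn from any particular test.

```python
def spearman_brown(reliability: float, length_factor: float) -> float:
    """Predicted reliability when a test's length is multiplied by
    length_factor, per the Spearman-Brown prophecy formula."""
    return (length_factor * reliability) / (1 + (length_factor - 1) * reliability)

# A hypothetical short test with reliability 0.70:
print(round(spearman_brown(0.70, 3.0), 2))  # tripled in length -> 0.88
print(round(spearman_brown(0.70, 0.5), 2))  # cut in half -> 0.54
```

The formula also shows why the gains flatten out: lengthening an already reliable test buys relatively little additional accuracy.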

The good news is that advances in psychometrics and testing technology suggest there are ways to deliver tests that aren’t long, or don’t feel long, without compromising the quality of information they yield about candidates. Adaptive testing and assessments that incorporate “gamified” experiences are two examples of how tests can deliver highly accurate information without sacrificing candidate experience. As testing technology improves, quality and speed of assessment will less and less be a zero-sum game for employers.
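To give a flavor of how adaptive testing shortens tests without giving up accuracy, here is a deliberately simplified sketch of the core loop: estimate the test-taker’s ability after each response, then choose the next item whose difficulty best matches that estimate. Real adaptive engines use item response theory and maximum-likelihood scoring; the item pool, update rule, and numbers below are toy assumptions, not any vendor’s algorithm.

```python
# Toy item pool: each item has a difficulty on an arbitrary ability scale.
item_pool = {f"item_{i}": difficulty for i, difficulty in enumerate(range(-5, 6))}

def next_item(ability_estimate, remaining):
    """Choose the unused item whose difficulty is closest to the estimate."""
    return min(remaining, key=lambda item: abs(item_pool[item] - ability_estimate))

def run_adaptive_test(answer_fn, num_items=5):
    ability, step = 0.0, 2.0
    remaining = set(item_pool)
    for _ in range(num_items):
        item = next_item(ability, remaining)
        remaining.discard(item)
        # Crude update: move the estimate up after a correct answer and down
        # after a miss, shrinking the step each round (real systems use IRT).
        ability += step if answer_fn(item) else -step
        step /= 2
    return ability

# Simulate a test-taker who can answer items up to difficulty 2.
print(run_adaptive_test(lambda item: item_pool[item] <= 2))  # converges near 2
```

Because each item is chosen to be maximally informative about the current estimate, an adaptive test can reach a given level of precision with far fewer items than a fixed-form test, which is exactly the quality-versus-speed win described above.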


How Integrity Tests Get Honest Answers

Integrity tests are one type of pre-hire personality test that seeks to determine how likely a person is to engage in counterproductive work behaviors, such as theft, fraud, or tardiness. Most personality tests measure a person’s traits and behavioral tendencies through a series of targeted questions, but the way different tests go about asking these questions can vary quite a bit.

There are generally two types of integrity tests: overt and covert. Overt tests are fairly transparent about what they are asking you – an overt question might directly ask whether you have ever stolen anything, or how prevalent you think workplace fraud is. In general, it is pretty obvious what an overt integrity test is attempting to measure. In contrast, covert tests assess behavior indirectly, by measuring personality traits associated with counterproductive work behaviors.

Covert questions essentially use an indirect approach to gauge the same type of information that overt questions do. However, one advantage of covert integrity tests is that it is more difficult for job candidates to manipulate an integrity test that uses covert questions.

Most personality tests require self-reported answers, meaning candidates answer questions about themselves. So candidates can potentially alter their responses to make themselves appear more desirable to employers. And while many tests have internal validity measures that monitor and flag test results whenever a candidate’s responses appear inconsistent or exaggerated, covert integrity tests make it more difficult for candidates to misrepresent themselves in the first place.
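As a rough sketch of how such an internal validity measure might work, the toy example below compares a candidate’s answers on pairs of items that probe the same trait, where the second item in each pair is reverse-keyed. The items, scale, and threshold are all invented for illustration; production instruments use carefully validated scales.

```python
# Likert responses: 1 = strongly disagree ... 5 = strongly agree.
# The second item in each pair is reverse-keyed, so a consistent
# respondent's two answers should roughly mirror each other.
ITEM_PAIRS = [
    ("I try to follow the rules", "Rules are made to be broken"),
    ("I pay my bills on time", "I often miss payment deadlines"),
]

def inconsistency_score(responses, pairs=ITEM_PAIRS):
    """Average gap between each answer and the mirror image of its twin."""
    gaps = [abs(responses[a] - (6 - responses[b])) for a, b in pairs]
    return sum(gaps) / len(gaps)

responses = {
    "I try to follow the rules": 5, "Rules are made to be broken": 4,
    "I pay my bills on time": 5, "I often miss payment deadlines": 4,
}

# Contradictory answers produce a large score, so this result gets flagged.
if inconsistency_score(responses) > 2:
    print("Flagged for review: responses look inconsistent or exaggerated.")
```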

Let’s dig deeper into how this works in the context of integrity tests. A very basic overt question might ask how much you agree with the following statement: “I think stealing is wrong.” There’s not a lot of nuance in this question, so any reasonable person applying for a job would probably answer strongly in the affirmative. As a result, this question alone may not be all that helpful in identifying people who have a lax perspective on stealing.

A covert question might assess personality traits such as conscientiousness with statements such as “I pay my bills on time,” “I try to follow the rules,” or “I follow a schedule.” Research has shown that people who are conscientious, organized, diligent, and follow through on plans tend to be less likely to engage in counterproductive activities or ignore rules at work. This is how covert integrity tests infer the likelihood of bad behavior based on associated personality traits.
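In code, the covert approach boils down to scoring a trait from several indirect items and treating the trait score as a probabilistic proxy for risk. Everything below, including the items, the scale, and the cutoff, is a hypothetical sketch rather than how any real integrity test is scored.

```python
from statistics import mean

# Likert responses (1-5) to conscientiousness items; note that none
# of them mentions theft, fraud, or any other counterproductive behavior.
conscientiousness_items = {
    "I pay my bills on time": 4,
    "I try to follow the rules": 5,
    "I follow a schedule": 3,
}

# Higher conscientiousness is associated with fewer counterproductive work
# behaviors, so a low trait score suggests elevated risk, never certainty.
trait_score = mean(conscientiousness_items.values())
risk_band = "lower" if trait_score >= 3.5 else "higher"
print(f"Conscientiousness: {trait_score:.1f}/5 -> {risk_band}-risk band")
```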

Covert tests have the added advantage of being tougher to “fake” because they are less transparent, but for both types of integrity tests, the concern about faking can be counteracted through the use of rigorously tested validity scales that detect and correct for such tendencies. Some integrity tests, such as the Workplace Productivity Profile (WPP), contain elements of both overt and covert tests. Research shows that both types can be effective selection techniques that help companies manage risk.

Should You Ask Job Candidates About Salary History?

Massachusetts recently passed a law prohibiting employers from requiring job candidates to divulge how much they earned in their last position. Massachusetts is the first state to pass a law of this kind, which goes into effect in 2018. Although the new legislation was designed to help close the wage gap that keeps women from earning as much as men, the law will effectively help people of all backgrounds who are seeking to advance their careers.

The logic behind the new law is compelling. People who start their careers in low-paying jobs have a hard time advancing to higher salaries if their pay is always based on their previous salaries. This leads to a cycle of low pay that is difficult to break.

While this law may appear to hurt employers by eliminating one way to gauge what salary to offer a candidate, a better approach for employers is to ask candidates about their salary expectations instead. This way, candidates can provide an estimate of their own value and experience without being held back by their current salaries. Savvy candidates who are asked about their salary history already know to deflect the question back to the employer by asking the interviewer what the expected salary range is for the role.

And ultimately, employers should resist using metrics that aren’t predictive of job performance when evaluating a candidate’s worth and potential salary. Previous salaries shouldn’t be an indication of a job candidate’s current, or future, worth as an employee. There is no research that we know of that says that a candidate’s previous salary is predictive of job performance in a new role, but there are plenty of other factors with proven correlations to job success. Aptitude tests (one of the most predictive hiring factors), work samples, resumes, work experience, recommendations, and education are all much stronger indicators of a candidate’s capabilities.

This law is yet another example of the concept of unconscious bias coming to the forefront of the hiring process. With more and more companies making a conscious effort to diversify their hiring, understanding unconscious bias is a good first step. Blind hiring is one practice some companies are adopting to minimize the effect that a person’s demographic information has on hiring outcomes. And using more objective or standardized measures to evaluate candidates, such as pre-employment tests, can go a long way toward promoting fair hiring practices.


Which States are the Smartest?

Cognitive aptitude is one of the best predictors of job performance because it measures so many key drivers of work success – the ability to solve problems, think critically, and learn new skills. But does cognitive aptitude vary from state to state?

Every year, hundreds of thousands of job seekers take the CCAT, our most popular aptitude test. Using a sample of nearly a million CCAT scores, we decided to dig a little deeper to see which states came out on top.

Of course, there are significant caveats to the following data: it is based on a sample from about 5,000 employers across all 50 states who administered the CCAT to job applicants, so we don’t claim that it is a representative sample. If certain states tend to hire more skilled positions, for example, then the data will be skewed. It will also be significantly influenced by the makeup of our customer base, which varies from state to state. On the other hand, the data is based on almost a million test results, so it’s by no means a small sample. Here’s what we found:

[Image: CCAT Map-01]

New Hampshire won the top position, just barely beating out Virginia, the runner-up. (Maybe we SHOULD let New Hampshire choose our next president!) Filling out the rest of the top 10 are Massachusetts, Idaho, Wisconsin, Texas, Ohio, Vermont, Maine, and North Carolina.

The average overall score on the CCAT is about 25, and within this data set there is a range of about 6 points between the highest- and lowest-scoring states, which is approximately one standard deviation. So while that gap isn’t very large, there is still some noticeable variation.

This list is based on a massive sample of 996,544 test scores from all 50 states, but states with smaller populations tended to have fewer test scores represented in the sample. As a result, smaller states were more likely to land at one of the two extremes – at the top or bottom of the list. This is a common phenomenon in statistical analyses, and we’ve observed it before when studying variation among the 50 states.
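This small-sample effect is easy to demonstrate with a quick simulation: give every “state” the same underlying score distribution and vary only the sample size, and the smallest samples will produce the most extreme averages. The parameters below loosely echo the numbers above (a mean of about 25 and a standard deviation of about 6) but are otherwise illustrative.

```python
import random

random.seed(42)

def state_average(n, mu=25, sigma=6):
    """Average of n simulated CCAT-like scores for one hypothetical state."""
    return sum(random.gauss(mu, sigma) for _ in range(n)) / n

def spread(averages):
    return max(averages) - min(averages)

# Every "state" draws from the same distribution; only sample size differs.
small_states = [state_average(100) for _ in range(25)]
large_states = [state_average(50_000) for _ in range(25)]

print(f"Spread of averages across small states: {spread(small_states):.2f}")
print(f"Spread of averages across large states: {spread(large_states):.2f}")
```

The small-sample averages scatter far more widely than the large-sample ones, even though every simulated state has identical underlying ability.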

How does your state stack up? See the full list below:

[Image: CCAT Map-02]

Why You Shouldn’t Use the DISC for Hiring

The DISC test is one of the most widely used personality assessments, but it shouldn’t be used for making hiring decisions. Why? Simply put, it’s not predictive of job performance.

DISC assessments are based on the DISC theory of personality developed by psychologist William Marston in the 1920s. Most DISC tests measure personality along four traits that make up the DISC acronym: Dominance, Influence, Steadiness, and Conscientiousness.

The DISC sorts people into categories based on self-reported answers.  For instance, the DISC might categorize you as a blend of the D and I traits, which are then used to describe your behavioral tendencies. The DISC does have a lot of value as a tool for improving self-knowledge and facilitating teamwork within an organization. However, when it comes to pre-employment testing, the DISC’s use of discrete trait categories is one of its main weaknesses.

Like the Myers-Briggs Type Indicator (which you should also never use for making hiring decisions), the DISC classifies people into types or buckets instead of describing traits across a spectrum. This goes against trends in modern psychology – recent research tends to support a “trait over type” approach to personality, which views personality traits as continuums rather than as binary absolutes.

So, for instance, a type-based test might categorize you as a Conscientious type, while a trait-spectrum test might place you in the 74th percentile for conscientiousness. There’s a big difference in specificity between these two outcomes and, as a result, tests that attempt to categorize people into types tend to lack both reliability and validity when compared to tests that report traits as percentiles.

Another downside to using the DISC in the hiring process is that it is not a normative assessment. Normative tests compare one person’s scores with the scores of others in a larger population. That’s how normative personality tests are able to provide percentile scores – scoring in the 81st percentile in extroversion means you are more extroverted than 81% of the people in the norming group. And for many normative personality tests, these norming samples are massive, which makes the resulting percentile scores more reliable.
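Mechanically, a percentile score is just the share of the norming group that scored below a given raw score. Here is a minimal sketch, assuming the norming data is a simple sorted list of scores (all numbers are hypothetical):

```python
from bisect import bisect_left

def percentile(raw_score, norm_scores):
    """Percent of the norming group scoring below raw_score (list must be sorted)."""
    return 100 * bisect_left(norm_scores, raw_score) / len(norm_scores)

# Hypothetical norming sample for an extroversion scale.
norms = sorted([12, 15, 18, 19, 21, 22, 24, 26, 29, 33])
print(percentile(25, norms))  # -> 70.0: more extroverted than 70% of the group
```

With a norming sample in the hundreds of thousands rather than ten people, these percentile estimates become extremely stable, which is the reliability advantage described above.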

The ability to compare one individual’s personality to others is the critical missing piece needed to validate a personality test’s ability to predict anything at all. Normative personality tests, such as the EPP, use percentiles to find correlations between personality traits and job performance. For instance, a consistent correlation has been found between high competitiveness (i.e. people with higher percentiles in the Competitiveness trait) and job success in a sales role. This information can then be used by employers to make an informed hiring decision – when a job candidate takes a normative personality test, the employer can see how the candidate’s competitiveness percentile compares with those who typically excel in a sales role.

DISC assessments lack this predictive ability, and that’s the main reason why they aren’t recommended for pre-employment testing. One of the leading publishers of the DISC even states on their website that the “DiSC is not recommended for pre-employment screening because it does not measure a specific skill, aptitude or factor specific to any position” and that the “DiSC is not a predictive assessment so assumptions should not be made regarding an applicant’s probability of success based solely on their style.”

So while many organizations may find value in this personality test as a tool for improving self-awareness and fostering team communication, this assessment should NOT be used for making hiring decisions.

Exciting Updates in HireSelect®!

Today we’re releasing a whole host of awesome updates in HireSelect, all designed to make your experience even more streamlined and user-friendly. The most obvious change is that we’ve given HireSelect a design facelift, and we’ve also added a number of really great new features that we think you’ll enjoy:

New and Improved Dashboard

We redesigned the Dashboard to give you easy access to the activities you do the most in HireSelect. See the latest testing activity, easily copy links to your test batteries, and quickly find and compare the results for your most recent job postings. You can also view some of the latest updates in HireSelect or schedule a training session with one of our experts.

[Image: Dashboard Update]

Streamlined Test Administration

Administering tests is now easier than ever on the new Administer Tests tab. There you can administer tests in two ways – by using testing links or by scheduling tests manually. Both methods work just as they did before, but we’ve restructured the page to make the process more intuitive.

[Image: Administer Tests Update]

Resume Viewer: A Game-Changing Feature!

The Resume Viewer streamlines the way you view your candidates’ resumes. With this new feature, you can select one or more resumes and search them for important keywords. From the Resume Viewer, you can see all the relevant information associated with a candidate, including test scores and workflow statuses. You can also download and print the resume, rate the candidate, and add notes. And if you have a HireSelect Pro account, you can email the candidate directly from the viewer.

To use the Resume Viewer, go to the main Results page, check the boxes next to the resumes you want to view, and then click the Resume Viewer tab in the blue box on the right.

[Image: Resume Viewer Update]

New FAQs and Help Section

We’ve just added a new FAQs section where you can find answers to a lot of your questions about HireSelect. Find the FAQs under the new Help tab.

We also created a new page under the Help tab called HireSelect Updates, where we’ll regularly post information about new updates as we release them. Check back to catch any updates you may have missed!

[Image: FAQs Update]

We know change can be intimidating, and we’re here to help! As always, feel free to reach out to your Account Manager if you need a quick walk-through of any of the new functionalities in HireSelect.


What Our Data Says About the Gender Wage Gap

The wage gap between men and women is well-documented, and there’s much debate about the reasons behind the oft-cited statistic that women are paid 77 cents for every dollar earned by men.  One common explanation for the wage gap is that it is, at least in part, affected by the types of jobs and industries that men and women choose for their careers. So we decided to dig into our own data to find out what jobs men and women were actually applying to the most.

As a pre-employment testing company, we see hundreds of thousands of job seekers take our tests each year. This data gives us insight into the types of jobs people apply for. Here’s what we found:

[Image: Jobs by Gender Graphic]

The two lists have notable similarities and differences. Customer service representative takes the top position for both genders, while retail sales fills the fifth. Not surprisingly, the list for men skews toward more physically demanding jobs, such as laborers, team assemblers, and maintenance workers.

In contrast, women were more likely to apply for service-oriented jobs such as nursing aides, administrative assistants, tellers, accounting clerks, and office clerks. Men also tended to apply for roles working with computers while women were more likely to apply for organizational, financial, or managerial roles.

Again, none of this is very surprising, and the lists seem to conform to many of the assumptions we anecdotally make about the jobs that men and women choose.

What’s interesting is that when we compute a simple average of the expected national salaries for each list based on data from the Bureau of Labor Statistics, the average salary for the men’s list is $42,897 while the average salary for the women’s list is $35,811.

This calculation is a very rough estimate of the salary potential for the jobs that men and women are applying for. It doesn’t capture the number of jobs available in these positions, nor does it represent the number of people of each gender currently working in these positions. Rather, it represents the average salary for the jobs that men and women apply for the most.

What can be interpreted from this data? If anything, it confirms the idea that there is a wage gap, and that this wage gap may in part be influenced by the jobs that men and women apply for. This has broader societal implications about what types of jobs men and women are encouraged to seek, as well as the monetary value we place on different types of labor.  And none of this should distract us from the fact that there’s abundant evidence that women are paid less than men when they perform the same jobs.

Ultimately, there are likely to be many reasons why the wage gap exists, including discrimination, family responsibilities, and access to certain career paths and promotion tracks. Our data suggests that the division of jobs by gender is also a contributing factor; jobs done predominantly by women tend, on average, to pay less than jobs done predominantly by men. Nevertheless, the wage gap remains a complicated issue, and more research is required to discern more of its underlying causes.

To Find the Best Talent, Look Within

Different jobs call for different abilities. A well-known best practice for hiring people is to perform a thorough job requirements analysis that documents which skills and abilities are necessary for the job. But when it comes to discovering exactly which qualities best predict job success for a particular role at your organization, knowing where to start can be a challenge.

Pre-employment tests can help with this process: in fact, one of the best strategies for finding the right talent for your team is to first test your current employees.

The technical term for this process is a local validity study, which is essentially a way to measure how successful a pre-employment test is at predicting success for a particular role in a specific organization. It’s a “local” study because it focuses on your organization. And because most professionally developed pre-employment tests are already extensively validated, a local validity study serves as an extra layer of validation, providing immediate insight into the value of the test for your particular company.

So how exactly does a local validity study work in practice? Let’s imagine you’re hiring sales executives who will be responsible for selling a fairly complex product.  Cognitive aptitude tests and personality tests are a common choice for this type of position, so you administer the two tests to your existing sales executives. Next you would compare your employees’ test scores on both tests with a measure of their job performance to make sure the test scores correlate with the business outcomes you value, and to identify any specific qualities that are most predictive of success.
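In its simplest form, the statistical heart of that comparison is a correlation between test scores and a performance measure. Here is a minimal sketch, assuming you already have paired scores and ratings for your sales executives (all numbers are made up):

```python
from scipy.stats import pearsonr

# Hypothetical paired data for existing sales executives:
# each employee's aptitude test score and performance rating.
test_scores = [28, 35, 22, 40, 31, 26, 38, 24, 33, 30]
performance = [3.1, 4.2, 2.5, 4.8, 3.6, 3.0, 4.5, 2.8, 3.9, 3.4]

r, p_value = pearsonr(test_scores, performance)
print(f"Local validity coefficient: r = {r:.2f} (p = {p_value:.3f})")
# A meaningfully positive r, on a sample large enough to reach significance,
# suggests the test predicts performance for this role at this company.
```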

It’s a pretty simple concept, but there are a few key things to remember when conducting a study like this. First, you need to administer the test to a decent sample of people in order for your findings to have statistical merit. The bigger the sample size, the better; you’re unlikely to come up with any statistically significant finding unless the sample is at least 25 people, and preferably more.

Second, if you’re going to compare your employees’ test scores with their performance, you need a way to meaningfully measure performance within your organization. This can include anything from performance ratings to sales numbers, as long as management agrees internally that these metrics are accurate. If you can’t trust your performance metrics, you can’t trust the study.

Speaking of performance metrics, you need to have some range in performance ratings in order to see meaningful results. For example, if your chosen metric is an employee rating out of 5, and every employee received somewhere between 4 and 5, it will be difficult to see any correlation when the range of performance is so narrow. This is a classic example of what statisticians call a range restriction problem.
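You can see range restriction in action by computing the same correlation twice: once over the full range of ratings and once only over employees rated between 4 and 5, as in the example above. The data below is simulated purely for illustration.

```python
import random
from scipy.stats import pearsonr

random.seed(0)

# Simulate employees whose ratings genuinely track their test scores.
scores = [random.uniform(10, 50) for _ in range(500)]
ratings = [1 + 4 * (s - 10) / 40 + random.gauss(0, 0.4) for s in scores]

full_r, _ = pearsonr(scores, ratings)

# Now keep only the employees rated between 4 and 5.
kept = [(s, r) for s, r in zip(scores, ratings) if 4 <= r <= 5]
restricted_r, _ = pearsonr(*zip(*kept))

print(f"Correlation over the full range:   {full_r:.2f}")    # strong
print(f"Correlation over ratings 4-5 only: {restricted_r:.2f}")  # much weaker
```

The underlying relationship hasn’t changed; restricting the range of performance simply hides most of it, which is why testing only your top performers understates a test’s true validity.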

To get the best results in a local validity study, it’s recommended that you test a wide sample of employees in the position you’re hiring for, not just top performers. At first glance, it makes sense to only test the best employees so that you can directly identify the attributes you want in your candidates. But if you don’t test your mid to low performers, you won’t actually know for sure that your top performers would have scored higher than them on the test. Performing the study with all of your employees in that position gives you a clearer window into the test’s association with job performance.

Your current team is a powerful resource. Harnessing that resource can help you uncover the skills and abilities to search for when growing your team. Administering tests to your current employees before you begin searching for candidates allows you to better understand your team’s strengths and to construct blueprints for future hires.

When Hiring, How Important is Emotional Intelligence?

Emotional intelligence is a hot topic in HR lately and, at face value, it seems like an attribute that every great employee should have. But how do you define and measure emotional intelligence well enough to seek it out in your job candidates?

The answer is not so simple. Much of the ambiguity stems from competing definitions of what emotional intelligence is in the first place. There are two main models of emotional intelligence (EI), one based on abilities and another based on traits.

The ability model posits that people vary in their ability to process and think about emotions, and that this ability can be measured through adaptive behaviors. These behaviors include perceiving, using, understanding, and managing emotions, which this model measures through emotion-based problem solving tasks.

In contrast, the trait-based model measures EI through people’s self-perceived emotional abilities. EI tests that use this model require individuals to self-report their personality and behaviors in response to prompts, similar to the way many established personality tests assess individuals. There’s also a third “mixed” model, popularized by Daniel Goleman’s 1995 book Emotional Intelligence, which combines the ability and trait models. While there are pros and cons to each model, there is no general consensus within the scientific community about which one is more accurate.

To complicate things further, the research linking emotional intelligence to job performance shows very mixed results. One meta-analysis of dozens of studies on EI and the workplace concluded that the results so far are inconsistent. Noted psychologist Adam Grant, himself a fan of the new emphasis on emotional intelligence research, recently argued that the evidence does not yet support the use of EI tests to inform hiring decisions. In comparison, tests of cognitive aptitude (or traditional intelligence) are consistently shown to be much more predictive of performance than emotional intelligence.

This is not to say that emotional intelligence isn’t valuable in the workplace. Much of what we perceive EI to be may actually overlap with other, more established measures. For instance, some evidence shows that EI may be linked to traits commonly measured in personality tests, including agreeableness and openness, although the extent of those relationships varies from study to study. What’s more, EI has been shown to be positively correlated with cognitive aptitude, suggesting that some components of EI may be encompassed within traditional intelligence.

But many questions still remain: How can we measure EI in a way that is predictive of job performance? What relationship does EI have to cognitive aptitude? What relationship does EI have to personality?

Here at Criteria, we think emotional intelligence is a really exciting frontier for research. While a lot of fascinating work is being done to uncover the link between emotional intelligence and workplace performance, the current research isn’t quite strong enough for us to recommend using it as a factor for making hiring decisions.

So while for now there might not be a well-validated EI test for hiring purposes, there are ways you can approximate emotional intelligence through other more predictive factors. In the meantime, we look forward to seeing what future research has to tell us about emotional intelligence and the workplace.

Structured vs. Unstructured Interviews: The Verdict

For most employers, interviews continue to be a pivotal factor in the hiring process despite mounting evidence that they can be incredibly unreliable for predicting job success. One study found that impressions made in the first 10 seconds of an interview could impact its outcome; another study suggested that employers tend to hire the people they like the most on a personal level; and research has consistently demonstrated that unstructured interviews are one of the worst predictors of job performance.

Despite all this, ditching the interview altogether is probably not a good solution. The reason lies in the key difference between unstructured and structured interviews. Unstructured interviews lack defined questions and unfold organically through conversation. It’s easy to see how unstructured interviews can lead to bias when the “success” of the interview depends on natural chemistry or common interests.

In contrast, structured interviews consist of defined, standardized questions designed to efficiently determine whether the candidate is up for the job at hand. By standardizing the interview process for all candidates, structured interviews minimize bias so that employers can focus on the factors that have a direct impact on job performance. Intuitively, structured interviews should be more useful for predicting job performance, and the data bears this out: structured interviews are almost twice as predictive of job performance as unstructured interviews.

So why aren’t more people exclusively using structured interviews? One of the biggest obstacles may be how difficult it is to actually plan and write a structured interview in the first place. Constructing a format for a structured interview can be time-consuming, requiring careful thought and a little bit of trial and error. Structured interviews can also feel awkward and stiff for candidates.

While establishing a structured interview process may be a challenge, it’s still a worthy goal. The data consistently reaffirms that unstructured interviews are significantly less predictive than structured interviews. Unstructured interviews also increase your chances of introducing more bias into the process. If your goal is to hire the candidates who are most likely to succeed on the job, then structured interviews are the way to go.