This Just In – Google discovers that GPAs are useless

If you’re used to Google upending conventional wisdom, then yesterday’s interview with Laszlo Bock in the New York Times did not disappoint. Google has determined that test scores and transcripts are useless because they don’t predict performance among its employees. Since Google knows where the flu is breaking out, who is a good prospective customer, and even exactly how we want our inboxes arranged, it’s understandable that a lot of heads are nodding: “That’s so true. I always knew those tests and college grades were worthless.” Google is obviously a fabulously successful company and an incredible engine of innovation. It’s such a great place to work that they made a movie about how great a place to work it is. So I’m going to assume they’re well aware of the limits of their claim, and instead I’m going to say that as readers of the interview we should not lose sight of a fundamental fact –

Across a wide variety of employment settings, one of the most robust findings in organizational psychology is that tests of cognitive ability are strong predictors of job performance. If Google has found otherwise, what they have found is that grades and test scores are not predictive of performance at Google. In the workplace generally, tests remain highly predictive of success.

There are at least two factors in play here (and again I’m assuming the folks at Google are well aware of both of these points). First, when a company has built its brand on attracting only the brightest prospective employees, self-selection and the sheer volume of applicants make the hiring pool extremely competitive. Google likely doesn’t have much variability among those hired with respect to test scores and grades, and when there is no variability, there is no correlation with anything. It’s a similar argument to MIT saying that the SAT is useless for its admissions: the applicant pool is so vast and highly qualified that the incoming class is largely homogeneous on those measures.
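The range-restriction point is easy to see in a toy simulation. The sketch below is purely hypothetical (the applicant counts, cutoff, and correlation strength are invented for illustration, not taken from Google): test scores and performance are moderately correlated across the whole applicant pool, but once we keep only a top-1% slice of scorers, the score variance collapses and the observed correlation shrinks dramatically.

```python
import random

# Hypothetical illustration of range restriction (invented numbers, not
# Google's data). In the full pool, score and performance correlate ~0.5;
# among a top-1% slice of hires, the observed correlation is far smaller.
random.seed(0)

applicants = []
for _ in range(100_000):
    score = random.gauss(0, 1)
    performance = score + random.gauss(0, 1.7)  # pool correlation ~0.5
    applicants.append((score, performance))

def correlation(pairs):
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    sx = (sum((x - mx) ** 2 for x, _ in pairs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for _, y in pairs) / n) ** 0.5
    return cov / (sx * sy)

pool_r = correlation(applicants)

# "Hire" only the top 1% of scorers, as a highly selective firm would.
cutoff = sorted(s for s, _ in applicants)[int(0.99 * len(applicants))]
hired = [(s, p) for s, p in applicants if s >= cutoff]
hired_r = correlation(hired)

print(f"correlation in full pool: {pool_r:.2f}")  # roughly 0.5
print(f"correlation among hired:  {hired_r:.2f}")  # much smaller
```

Nothing about the test changed between the two numbers; only the variability of the sample did.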

The second point is a bit more subtle. We of course need to wonder how all those people with lower test scores and grades would have fared at Google had they been hired. But furthermore, once an organization has used a certain instrument to select its employees, the observed correlation between that instrument and job performance goes down. It’s not just that the range has been reduced through selection; it’s also that the information has already been acted upon. If someone is hired despite lower test scores, it usually means some compelling compensating characteristics made that person look like a good bet. That is why the correlation between a valid selection instrument and job performance can be dramatically depressed when looking only at the hired sample, and it’s common to misinterpret that low correlation as a sign of poor prediction. Again – thousands of research studies have confirmed the predictive validity of tests of cognitive ability for job performance. Google may well say, “Not here” – but they cannot (and did not) say, “Not anywhere.”
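This second mechanism can also be sketched with invented numbers. Suppose candidates are hired on a composite of test score plus “other strengths” (interviews, portfolio, whatever the compensating characteristics are), and true performance depends on both. Then among those hired, a low score implies the person must have been strong on the other dimension, which depresses the score–performance correlation beyond what range restriction alone would do. The weights and cutoffs below are assumptions for illustration only:

```python
import random

# Hypothetical sketch: hiring on a composite of score + other strengths.
# Among the hired, weak scorers tend to be strong on "other", so the
# within-hires score-performance correlation is depressed even further.
random.seed(1)

pool = []
for _ in range(100_000):
    score = random.gauss(0, 1)
    other = random.gauss(0, 1)  # compensating characteristics
    performance = 0.5 * score + 0.5 * other + random.gauss(0, 0.5)
    pool.append((score, other, performance))

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

pool_r = correlation([s for s, _, _ in pool], [p for _, _, p in pool])

# Hire the top 5% on the composite (score + other).
composites = sorted(s + o for s, o, _ in pool)
cutoff = composites[int(0.95 * len(pool))]
hired = [(s, o, p) for s, o, p in pool if s + o >= cutoff]
hired_r = correlation([s for s, _, _ in hired], [p for _, _, p in hired])

print(f"score vs performance, full pool: {pool_r:.2f}")
print(f"score vs performance, hired:     {hired_r:.2f}")
```

In this toy world the test is genuinely valid – it helps determine performance – yet the correlation computed only on the hired sample makes it look nearly useless.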

There were some valuable insights in that interview. I especially liked the trick of asking candidates to describe a successful problem-solving experience. Of course, you have to be sure you have reliable interviewers who will be systematic and calibrated in scoring the responses. And you have to be prepared for that question to become the new “Why are manhole covers round?” – a question that will receive scripted responses from the next generation of applicants to Google.