Last summer we reacted to an interview with Laszlo Bock at Google who seemed to say that test scores and grades were useless predictors for hiring decisions. We said that what constitutes information for hiring purposes at Google may well differ from what constitutes information for hiring elsewhere, and we pointed out that validating a selection tool after it has been used, and only for those who were selected, will typically yield lower estimates of the usefulness of that tool.
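That restriction-of-range effect is easy to see in a quick simulation. The sketch below is purely hypothetical (the true correlation of 0.5 and the top-10% selection rule are assumptions for illustration, not anything from Google's data): when you compute the predictor-performance correlation only among the people you hired, the correlation shrinks well below its value in the full applicant pool.

```python
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation, computed from scratch to keep the sketch self-contained."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
n = 100_000

# Assume a true test-score / job-performance correlation of about 0.5
# in the full applicant pool (a made-up value for illustration).
score = [random.gauss(0, 1) for _ in range(n)]
perf = [0.5 * s + random.gauss(0, (1 - 0.5 ** 2) ** 0.5) for s in score]

r_all = pearson(score, perf)

# Now "hire" only the top 10% of test scorers and re-validate the test
# using just that selected group -- the situation a company is in when it
# looks back at its own employees.
cutoff = sorted(score)[int(0.9 * n)]
selected = [(s, p) for s, p in zip(score, perf) if s >= cutoff]
r_selected = pearson([s for s, _ in selected], [p for _, p in selected])

print(f"correlation in full applicant pool: {r_all:.2f}")
print(f"correlation among those selected:   {r_selected:.2f}")
```

With these assumptions the within-hires correlation drops to well under half the pool-level value, even though the test is exactly as useful as it ever was; the test itself removed the variance it would need to show its value.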
If you’re used to Google upending conventional wisdom, then yesterday’s interview with Laszlo Bock in the New York Times did not disappoint. Google has determined that test scores and transcripts are useless because they don’t predict performance among its employees. Since Google knows where the flu is breaking out, and who is a good prospective customer, and even exactly how we want our inboxes arranged, it’s understandable that there appears to be a lot of head nodding going on with people saying, “That’s so true. I always knew those tests and college grades were worthless.” Google is obviously a fabulously successful company and an incredible engine of innovation. It’s such a great place to work they made a movie about how great a place to work it is. So I’m going to assume they’re well aware of the limits of their claim, and instead I’m going to say that as readers of the interview we should not lose sight of a fundamental fact -
During grad school I taught for the Johns Hopkins Center for Talented Youth, which offers advanced summer classes to 7th and 8th grade students who score above the national average on the SAT. In a short three-week session, these youngsters gobbled up the Harvard undergraduate Intro to Psych course. It was fun to work with such bright minds, and I often wonder what became of the students I met.
Everyone wants to compare themselves to Netflix, whose data-driven, personally tailored movie suggestions improve customer satisfaction and retention. Among the latest domains to see this trend: “learning analytics” in higher education. The basic idea is to use institutional data to help students successfully navigate towards their college degrees. Doesn’t sound controversial yet – data-driven decision making is usually just plain common sense.
Some newspapers and radio stations recently picked up a story that Facebook profiles can be revealing, and can yield information more predictive of job performance than typical self-report personality questionnaires or even an IQ test.
Okay, first an upfront explanation of why we are even blogging about this. We are a web-based business that generates vast amounts of data. We continuously monitor and analyze our data, and even sometimes blog about what we find. So when we saw that a blog from OKCupid was the source of headlines such as, "The Curse of Being Cute" we had to see what they had done with their data. They did this.
We made a few posts last year about the NFL and whether or not draft order is related to productivity. The core issue for us was a claim Malcolm Gladwell has repeatedly asserted: that the draft order of NFL quarterbacks (QBs) is unrelated to performance. Well, the issue was raised again over the Labor Day weekend, and we were alerted to some more recent material we hadn't seen because, to be honest, we thought we were done with the whole thing. We found this very sensible WSJ blog from last December, but then we also found this CNBC blog from May of this year. Darren Rovell, the CNBC blogger, reproduced the following table from economist Dave Berri. It purports to show that the performance of lower-drafted QBs is similar to that of the top-drafted QBs. Now to be fair, the table was used to argue that the cost-benefit of the lower picks might exceed that of the higher picks, and that is entirely plausible. But Berri also uses a table like this to argue that draft order is not a good predictor of success.
I'll admit I'm in a curmudgeonly mood because I feel like I'm wasting time writing about something so obvious. But we've been implicated in a strange argument that erupted in the blogosphere last week, so I'm compelled to write a few words to clear our name. As we mentioned in our last post, a few days ago Steven Pinker reviewed Malcolm Gladwell's latest book and criticized him rather harshly for several shortcomings. Gladwell appears to have made things worse for himself in a letter to the editor of the NYT by defending a manifestly weak claim from one of his essays – the claim that NFL quarterback performance is unrelated to the order in which they were drafted out of college. The reason we're implicated is that Pinker identified an earlier blog post of ours as one of three sources he used to challenge Gladwell (yay us!). But Gladwell either misrepresented or misunderstood our post in his response, and admonished Pinker by saying "we should agree that our differences owe less to what can be found in the scientific literature than they do to what can be found on Google."
Last week the New York Times published an article on a possible Obama effect on test scores of black test takers. It was unusual for a major newspaper to publish a story on a social science study before that study had been published, let alone peer reviewed. But when you hear that so-and-so reported their results at some national conference, that isn't really peer reviewed either. The conference organizers have often only seen a 200-word description of what the researchers thought they would present. So although unusual, it's not entirely out of line to try to get a jump on a story like this, and the Times did circulate the study to some academics to get professional opinions.