Will those insights be tested, or simply used to justify the status quo and reinforce prejudices? When I consider the sloppy and self-serving ways that companies use data, I'm often reminded of phrenology, a pseudoscience that was briefly the rage in the nineteenth century. Phrenologists would run their fingers over the patient's skull, probing for bumps and indentations. Each one, they thought, was linked to personality traits that existed in twenty-seven regions of the brain. Usually the conclusion of the phrenologist jibed with the observations he made. If the patient was morbidly anxious or suffering from alcoholism, the skull probe would usually find bumps and dips that correlated with that observation – which, in turn, bolstered faith in the science of phrenology. Phrenology was a model that relied on pseudoscientific nonsense to make authoritative pronouncements, and for decades it went untested. Big Data can fall into the same trap. Models like the ones that red-lighted Kyle Behm and black-balled foreign medical students at St. George's can lock people out, even when the "science" inside them is little more than a bundle of untested assumptions.