A lot of expertise is bullshit. Especially lawyers'. On listening to Cass Sunstein.
The conference was, overall, pretty good. The invited speakers were mostly academic AI and statistics practitioners, and, as with most actual practitioners, the hype factor was low. Cynthia Rudin, a computer scientist at Duke who works on interpretable AI, is one interesting example. She has worked to show that there is no trade-off between model interpretability and accuracy in the vast majority of use cases. One demonstration compares a model produced using one of her methods (certifiably optimal rule lists, or CORELS) with a widely used criminal-justice black box called COMPAS. Her model easily fits, handwritten, on a postcard. It mostly uses age and number of prior offenses to predict the likelihood of re-offending. COMPAS, by contrast, is proprietary and uses "130 factors" in its decisions. It is an intentionally black-boxed model, hidden from view because of "trade secrets," but the result is the same for models that, because of their complexity, cannot be interrogated. We can't know why a decision was made; we can guess, or estimate why, but we can't really know.
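To make the contrast concrete, here is a minimal sketch in Python of what a rule-list model of that shape looks like. The specific rules and cutoffs are invented for illustration; this is the form of the model, not Rudin's published one.

```python
# A minimal sketch of a rule-list classifier of the kind Rudin
# describes. The rules and cutoffs below are invented for
# illustration; they are NOT the real published model.

def predict_rearrest(age: int, priors: int) -> bool:
    """Predict re-offense with a short, human-readable rule list."""
    if age < 21 and priors >= 1:  # young, with any prior record
        return True
    if priors > 3:                # long prior record, at any age
        return True
    return False                  # default: predict no re-arrest

# Every decision traces to exactly one rule, and the whole model
# fits, handwritten, on a postcard.
print(predict_rearrest(age=19, priors=2))   # True  (first rule fires)
print(predict_rearrest(age=45, priors=0))   # False (default rule)
```

That is the whole model. There is nothing to hide, and nothing that needs hiding.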
Black boxes make for excellent investor pitches. In the language of VC culture, they create moats around your business that are harder to compete with. Never mind that most state-of-the-art machine learning is open source and essentially fungible. The marketing pitch is more important than the reality, because the goal is less to revolutionize society than to get rich enough to leave society behind. In any event, many of the AI practitioners pointed at the ways in which these models are oversold, described approaches that mitigate some of the potential harms, or demonstrated that, for a given value (such as not discriminating), there are inevitable trade-offs with things like predictive accuracy. Much of this insider critical work is obviously small-bore. But some of it at least grapples with the details in ways that are useful.
Enter Cass, an undeniable heavyweight of regurgitating behavioral-science pablum for consumption by lawyers. The point of his talk was, I think, to emphasize that, from a decision-making perspective, algorithms are "noise-free." By this I take him to mean that, unlike humans, algorithms give you the same decision for a given input. This fairly obvious fact, really a truism about how computers work, is taken as proof, without any additional analysis, that computer decision-making is better. The supposedly illuminating example of bail algorithms is the main vehicle for this argument. The talk took the complications of such an endeavor as read (for example, do arrest records reflect criminal behavior or police behavior?) and briskly moved on to celebrating the removal of humans from the process because they aren't consistent. Cash bail as an institution? Not worth interrogating.
There are a lot of things to dislike about this kind of glib but ultimately baleful snake-oil intellectualism. But what irked me particularly was the laziness. The central idea of the talk was hardly related to the conference; indeed, the main relationship it had to the proceedings was that the keynote speaker is currently working on a book in that vein. The idea that human decisions are noisy is implicit in doing statistics at all. And the insight hardly seems meaningful in the context of models that, like some of the recent language models produced by Google, have a trillion parameters. Given that the input spaces for many of these models are infinite (written language) or nearly infinite (images), and that models are regularly refined, adjusted, or retrained, the insistence on the noiselessness of computer decision-making seems just plain clueless. In models where small perturbations in the input data can produce big changes in predictions, changes that are not easily understood or explained, the fact that the exact same input will get you the exact same result next time hardly seems comforting. That you can't explain or interpret a decision, and therefore can't understand it, is central to the problem of inhumane decision-making. The noise is in the model.
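A toy sketch of that last point, with made-up weights: a model can be perfectly deterministic, and so "noise-free" in Sunstein's sense, while still flipping its answer on a negligible change to the input.

```python
# Toy illustration with made-up weights: a deterministic model whose
# decision flips under a tiny input perturbation. "Noise-free" says
# nothing about stability, explainability, or being right.
import numpy as np

weights = np.array([2.0, -1.0])   # a fixed, frozen linear model

def predict(x: np.ndarray) -> int:
    """Same input, same answer, every single time."""
    return int(x @ weights > 0)

x = np.array([0.50, 1.005])
x_nudged = x + np.array([0.01, 0.0])   # a small nudge to one feature

print(predict(x))         # 0
print(predict(x_nudged))  # 1: the decision flips
```

Consistency of that kind is not a virtue; it is just repeatable brittleness.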