I’ve been following, and doing my best to join, the conversation taking place at Simple Justice, Scott Greenfield’s blog, for about a year. Yesterday my wife had to do her best to contain my ego when one of my comments from last week broke the post barrier and became the topic of conversation. For those of you wondering why this is exciting, please understand: though SJ is a public blog with an open commenting policy, it doesn’t take long to figure out that Scott doesn’t suffer fools, and his bar for the foolish is pretty low.
But I digress. Last week Scott wrote a piece about confusion around sentencing guidelines. This story, and the ensuing conversation involving real judges, is further evidence of what a growing segment of the American public is slowly beginning to understand: if fairness and objectivity are important metrics by which we measure our legal system, our legal system is not doing so well. But Tron Carter has known this for years.
The specific problem being discussed now is variation in sentencing caused by ignorance, subjectivity, or both. My solution? Automate it (of course). With sufficient investment, a computer with 1/10th the horsepower used to game the stock market could remove the subjectivity and the lack of data from the equation. The difficult part would be creating an algorithm that can handle all the mitigating factors on either side of the argument, but the underlying technology already exists. This is largely a configuration problem.
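To make the "configuration problem" concrete, here is a minimal sketch of what such a system might look like. Every name, factor, and number below is invented for illustration; the point is that the computation is trivial, and the hard, contested part is agreeing on the factor table itself.

```python
# Hypothetical sentence-o-matic core. The adjustment table is the
# "configuration" that a democratic process would have to settle on;
# all values here are made up for illustration.

def recommend_sentence(base_months, factors):
    """Apply agreed-upon adjustments (in months) for each mitigating
    or aggravating factor present in the case."""
    adjustments = {
        "first_offense": -6,   # mitigating
        "cooperated": -4,      # mitigating
        "used_weapon": +12,    # aggravating
        "leadership_role": +8, # aggravating
    }
    months = base_months + sum(adjustments.get(f, 0) for f in factors)
    return max(0, months)  # a sentence can't go below zero

print(recommend_sentence(24, ["first_offense", "cooperated"]))  # 14
```

Given identical inputs, this always produces identical outputs, which is exactly the consistency property being argued about below.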
However, Simple Justice is not in favor of computers handling sentencing. That’s fair, though I don’t feel as though Scott’s made a compelling argument as to why. So far I’m seeing a lot of ‘technology bad, people good’, which is frustrating because I happen to know Scott’s not a Luddite, and he certainly does not believe people are infallible. However, he does get to a couple of points at the bottom of the post that add up to this:
- GIGO – Who decides how this thing is programmed? Fair enough… but surely that’s the heart of the democratic process. Who decides how the system is programmed in meat-space? People. That’s not going to change whether a judge or a sentence-o-matic 1000 handles the sentencing. It’s just that the sentence-o-matic 1000 will remove the bias and the lack of objective data.
- We (criminal defense lawyers) need subjectivity in sentencing because every guilty defendant is a special snowflake and we need the ability to argue for leniency. Oh, now I get it, subjectivity is a feature, not a bug.
It’s at this point that another commenter leaves what I think is the most important question:
Why is consistency the goal? If judges are supposed to weigh that which cannot be measured, disparities in outcomes are to be expected.
A sentence-o-matic 1000 containing nothing more than a suitably tuned random number generator might pass the judicial equivalent of the Turing test: An observer could not discern whether the sentence came from a judge or the machine.
Since that approach is unlikely to please most people, perhaps the real requirement is not consistency, but that judges compose a plausible story about how they choose each sentence.
Consistency is not a goal in itself, but a proxy to show that sentencing is not a product of judicial whims. If the same defendant, under the same circumstances, was sentenced by ten different judges, the sentence should (within I suppose some reasonable parameters) be the same. If not, then sentencing would be arbitrary, and that would undermine the integrity of the process.
Since one defendant can’t be sentenced by ten judges, the system instead looks to ten defendants, using criteria that the government believes make them comparable, to see whether the sentences are consistent and thereby validate the methodology of sentencing.
To which I say: Great! Because technology could monitor this as well. Or perhaps that already exists?
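A sketch of how that monitoring could work, assuming cases can be bucketed by the government’s comparability criteria: group sentences by case profile and flag any profile whose sentences spread wider than some agreed tolerance. The profiles, sentences, and threshold below are all invented.

```python
# Hypothetical consistency monitor: flag case profiles whose sentences
# vary more than an agreed-upon spread. All data here is made up.
from collections import defaultdict
from statistics import pstdev

def flag_inconsistent(cases, max_spread_months=6):
    """cases: iterable of (profile, sentence_in_months) pairs.
    Returns {profile: spread} for profiles exceeding the tolerance."""
    groups = defaultdict(list)
    for profile, months in cases:
        groups[profile].append(months)
    return {p: pstdev(s) for p, s in groups.items()
            if len(s) > 1 and pstdev(s) > max_spread_months}

cases = [
    ("burglary/first_offense", 12),
    ("burglary/first_offense", 14),
    ("burglary/first_offense", 36),  # outlier for the same profile
    ("fraud/repeat", 24),
]
print(flag_inconsistent(cases))  # flags only burglary/first_offense
```

Whether something like this already runs inside a sentencing commission somewhere, I don’t know, but nothing about it is beyond existing technology.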