The Rate of Wrongful Conviction: 1%, 10%, or More? We have a lot of people incarcerated, around 2.5 million, so a wrongful conviction rate of only 1%, if it applies to all those incarcerated, means we have wrongfully imprisoned 25,000 people [and the real criminals are not paying and are free]


MONDAY, JUNE 28, 2010

On The Rate of Wrongful Conviction: Chapter 0.027

In my spare time, I’m preparing a compilation of essays on various estimates of our country’s wrongful conviction rate.  As I draft them, I’ll publish them here. When I’m done with all of them, I will compile them into a single document and make it available on Scribd for free, and on Amazon for a minimal cost.
The chapters will be numbered according to the predicted wrongful conviction rate, in percent. I will begin with the lowest estimate and work my way up to the highest. Keep in mind that we have a lot of people incarcerated, around 2.5 million. A wrongful conviction rate of only 1%, if it applies to all those incarcerated, means we have wrongfully imprisoned 25,000 people. A wrongful conviction rate of 10% means we have wrongfully imprisoned a quarter of a million people.
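For anyone who wants to check or extend that arithmetic, here is a minimal Python sketch; the 2.5 million figure and the 1% and 10% rates are simply the ones used above.

```python
# Back-of-the-envelope: wrongfully imprisoned = incarcerated x wrongful conviction rate
incarcerated = 2_500_000  # rough U.S. incarcerated population cited above

for rate in (0.01, 0.10):  # the 1% and 10% rates used as examples in this introduction
    print(f"At a {rate:.0%} wrongful conviction rate, "
          f"roughly {int(incarcerated * rate):,} people are wrongfully imprisoned.")
```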
It should be interesting to see what the various studies have to say. Let’s get started.
CHAPTER 0.027
THE SCALIA NUMBER
Joshua Marquis is the district attorney of Clatsop County, the county at the mouth of the Columbia River, in the northwest corner of Oregon. His article “The Innocent and the Shammed” appeared in the January 26, 2006 issue of The New York Times. In it, he presented his estimate of our country’s wrongful conviction rate.

In the Winter 2005 Journal of Criminal Law and Criminology, a group led by Samuel Gross, a law professor at the University of Michigan, published an exhaustive study of exonerations around the country from 1989 to 2003 in cases ranging from robbery to capital murder. They were able to document only 340 inmates who were eventually freed. (They counted cases where defendants were retried after an initial conviction and subsequently found not guilty as “exonerations.”) Yet, despite the relatively small number his research came up with, Mr. Gross says he is certain that far more innocents languish undiscovered in prison.

So, let’s give the professor the benefit of the doubt: let’s assume that he understated the number of innocents by roughly a factor of 10, that instead of 340 there were 4,000 people in prison who weren’t involved in the crime in any way. During that same 15 years, there were more than 15 million felony convictions across the country. That would make the error rate .027 percent — or, to put it another way, a success rate of 99.973 percent.

Five months later, Justice Antonin Scalia published a concurring opinion in the case of Kansas v. Marsh. Scalia chided those who dissented, led by Justice David Souter, for suggesting that innocent people may have already been executed in the United States.
It should be noted at the outset that the dissent does not discuss a single case — not one — in which it is clear that a person was executed for a crime he did not commit. If such an event had occurred in recent years, we would not have to hunt for it; the innocent’s name would be shouted from the rooftops by the abolition lobby.
Souter’s dissent, however, did mention the study by Samuel Gross, mentioned above. Scalia dismissed that study by quoting directly from The New York Times article.

Of course, even with its distorted concept of what constitutes “exoneration,” the claims of the Gross article are fairly modest: Between 1989 and 2003, the authors identify 340 “exonerations” nationwide — not just for capital cases, mind you, nor even just for murder convictions, but for various felonies. Joshua Marquis, a district attorney in Oregon, recently responded to this article as follows:

“[L]et’s give the professor the benefit of the doubt: let’s assume that he understated the number of innocents by roughly a factor of 10, that instead of 340 there were 4,000 people in prison who weren’t involved in the crime in any way. During that same 15 years, there were more than 15 million felony convictions across the country. That would make the error rate .027 percent—or, to put it another way, a success rate of 99.973 percent.”

Scalia then adopts the 0.027% error rate as fact.
The proof of the pudding, of course, is that as far as anyone can determine (and many are looking), none of the cases included in the .027% error rate for American verdicts involved a capital defendant erroneously executed.
Ironically, Scalia had earlier in his opinion berated Souter and the other dissenters for parroting news articles without critical review.
Of course even in identifying exonerees, the dissent is willing to accept anybody’s say-so. It engages in no critical review, but merely parrots articles or reports that support its attack on the American criminal justice system.
Joshua Marquis made an elementary but critical mistake in his calculation. He divided his estimate of all those who might be exonerated by his estimate of all felony convictions. He should have instead divided by all felony convictions in which exoneration is reasonably possible.

Most felony convictions, for example, are for crimes such as burglary, assault, and drug offenses. Such crimes are frequently devoid of DNA evidence and typically result in sentences of less than ten years. Since DNA is the most powerful evidence of actual innocence, and since the average time from conviction to exoneration is ten years, people convicted of the lesser felonies are seldom exonerated, for reasons having nothing to do with guilt or innocence.

Rape and murder cases, on the other hand, constitute less than two percent of all felony convictions but represent ninety-six percent of all known exonerations. (See Samuel Gross’ rebuttal to Scalia’s opinion “Souter Passant, Scalia Rampant: Combat in the Marsh.”)

If Joshua Marquis were to correct his calculation from

(10 x 340) / (15,000,000)

to

(10 x 340) / (15,000,000 x 0.02 / 0.96)

as I believe he should, then his estimate would rise to 1.1%.

Marquis’ corrected estimate would still, however, depend entirely on the arbitrary multiplier he selected in the numerator. He simply assumed the Gross study had identified 10% of all the people wrongfully convicted. Had he assumed instead that the Gross study had identified all 100% of those wrongfully convicted, his corrected estimate would be 0.11%.

On the other hand, had Joshua Marquis assumed the Gross study identified only 1% of the factually innocent, a number which seems as reasonable to me as 10%, then his corrected estimate would indicate that 11% of our felony convictions, whether by trial or plea bargain, are wrongful.
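Here is a small Python sketch of that arithmetic. The inputs are the figures quoted in this chapter; the variable names are mine, and the loop simply runs the three alternative multipliers discussed above. (Marquis rounded 10 x 340 up to 4,000, which is why he reported .027 percent rather than .023 percent.)

```python
# Marquis's calculation and the correction argued for above.
exonerations = 340               # Gross study, 1989-2003
felony_convictions = 15_000_000  # felony convictions over the same 15 years
rape_murder_share = 0.02         # rape and murder: about 2% of felony convictions...
exoneration_share = 0.96         # ...but about 96% of known exonerations

# Restrict the denominator to convictions in which exoneration is reasonably possible
effective_convictions = felony_convictions * rape_murder_share / exoneration_share

for multiplier in (1, 10, 100):  # Gross found all, 10%, or 1% of the wrongfully convicted
    original = multiplier * exonerations / felony_convictions
    corrected = multiplier * exonerations / effective_convictions
    print(f"multiplier {multiplier:>3}: Marquis-style rate {original:.3%}, "
          f"corrected rate {corrected:.2%}")
```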

Despite multiplying some numbers together and then dividing by another, the Scalia number is no more than a guess by a single prosecutor, a guess soon adopted by a Supreme Court justice who demonstrates no aptitude for the simplest of applied mathematics.
Chapter 0.027: The Scalia Number
Chapter 0.5: The Huff Number
Chapter 0.8: The Prosecutor Number
Chapter 1.0: The Rosenbaum Number
Chapter 1.3: The Police Number

Chapter 1.4: The Poveda Number
Chapter 1.9: The Judge Number
Chapter 2.3: The Gross Number
Chapter 3.3: The Risinger Number
Chapter 5.4: The Defense Number
Chapter 9.5: The Inmate Number
Chapter 10.1: A Skeptical Juror Number
Chapter 11.1: A Skeptical Juror Number
Chapter 11.4: The Common Man Number

Unfortunately, the chapter numbering goes backwards a wee bit. My error once again. (I almost typed a lame excuse, but I’ll just get on with it.) I write this post assuming people have read Chapter 11.1, my estimate based on judge-jury agreement data. In the monograph, I’ll have to restructure everything.
Chapter 10.6
The Spencer Number

I was inspired to pursue this quantification effort by a paper entitled “Estimating the Accuracy of Jury Verdicts,” written by Bruce Spencer in April 2006 and revised a year later.

Bruce D. Spencer is a Professor of Statistics at Northwestern University. That’s interesting in that Northwestern University is the home of David Protess and the Medill Innocence Project. Those folks have actually helped free innocent people from wrongful imprisonment, and for that I tip my hat. Their name may ring a bell to those of you who have read my posts on the Hank Skinner case.

I consider Bruce Spencer to be the father of modern-day wrongful conviction estimation. It’s a title for which many strive but which only one can hold. Everyone before Spencer guessed, surveyed other people who guessed, surveyed people behind bars, divided exonerations by convictions, or just gave up. Spencer did none of these things. He realized that the rate of wrongful conviction was just one piece of valuable information that could be gleaned from judge-jury agreement data.
Despite the lofty title I just bestowed upon him, and despite the shocking implications of his paper, Spencer’s work regarding the rate of wrongful convictions has generally been overlooked by both press and public. I believe there are two reasons for that. Reason number one: Spencer’s an egghead and he writes accordingly. First we’ll look at the Urban Dictionary for its definition of an egghead.
1. A person who is considered intellectually gifted in the field of academics. “Egghead” is usually used as college-speak to describe a brainiac.
2. A person’s whose head is shaped like an egg. Most people however, will use this word interchangeably as a pun. It has also been known that people whose heads are shaped like an egg are usually large at the top, which explains the larger brain-size.

Next, we’ll look at just two contiguous sentences from Spencer’s paper:

In Section III, an estimator of jury accuracy is developed that has three components of error, survey error from estimating the agreement rate, specification error arising because differential accuracy between judge and jury is not observed and the dependence between judge and jury verdicts is not known, and identification error arising because we cannot distinguish correct agreement from incorrect agreement. The specification error will be one sided, leading to overestimates of jury accuracy, provided that two conditions hold: (i) errors in the judge’s and jury’s verdicts for a case are either statistically independent or positively dependent, and (ii) the judges’ verdicts are no less accurate on average than the juries’, even though for individual cases the judge’s verdict may be incorrect when the jury’s verdict is correct.

There you go.

The second reason that Spencer’s work hasn’t received the attention I think it deserves is that Bruce D. Spencer is a Professor of Statistics, and he doesn’t trust the randomness or sample size of his source data any further than he can throw it. Every time he provides a shocking number, he leads or follows it with a warning that his numbers should not be used by Joe Q. Public. Below, I provide examples of the caution he sprinkles liberally throughout his paper.

The jury verdict was estimated to be accurate in no more than 87% of the NCSC cases (which, however, should not be regarded as a representative sample with respect to jury accuracy).

Caveat: the NCSC cases were not chosen with equal probabilities as a random sample, and the estimates of accuracy should not be generalized to the full caseload in the four jurisdictions let alone to other jurisdictions.

The analysis suggests, subject to limits of sample size and possible modeling error, …

The unequal sampling rates imply that the results for the NCSC sample cases should be weighted if they are to generalize to the full caseload in the four jurisdictions. No such weighting is employed in the present analysis, and the statistical inferences do not extend outside the cases in the NCSC study.

In light of these limitations, the empirical estimates from the data analysis must be interpreted with great caution and in no event should be generalized beyond the NCSC study.

The estimates are no basis for action other than future studies.

Assuming you can work through the writing and the math (and there is some substantial math), you’ll go through a series of “Wow! Never mind” moments. But if you finally get through it (after about a couple dozen tries in which you still can’t work all the way through the stupid math and that makes you kinda discouraged so you just say “screw it”) you might be inspired to try something on your own.

Here’s where Spencer started.

The table summarizes the results of 290 criminal jury trials surveyed by the National Center for State Courts (NCSC) during the period 2000-2001. In each of the trials, the judge recorded the verdict he or she would have rendered had it been a bench trial. The table shows that the judge and jury agreed Guilty was the proper verdict in 64.1% of the trials. In 12.8% of the trials, the judge and jury agreed that Not Guilty was the proper verdict. Overall, the judge and jury agreed in 76.9% of the cases. They disagreed only 23.1% of the time.
The table gave Spencer three independent inputs. There are four squares, four pieces of information, but only three of them are independent. The fourth one, whichever you choose, must be set such that the sum of the four squares equals 100%.
Spencer needed to solve for five output values: The rate of wrongful conviction for both judge and jury, the rate of “wrongful acquittal” for both judge and jury, and the fraction of defendants who were actually innocent or actually guilty. Spencer couldn’t solve for five variables when he had only three inputs. He couldn’t do it and nobody else can. It’s not Spencer’s fault. It’s just mathematically impossible. Spencer needed more input, and the NCSC study had more to give.
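To see why, here is a minimal sketch in Python. The notation is mine, not Spencer's, and it assumes, purely for illustration, that judge and jury errors are conditionally independent given the defendant's true status. The point is only the counting argument: the function maps five unknowns onto the four cells of the agreement table, and since those cells sum to one, the table supplies just three independent equations.

```python
# Why the 2x2 agreement table alone cannot pin down the error rates:
# five unknowns, but only three independent observations (the four cells sum to 1).

def agreement_cells(p_guilty, jury_hit, jury_false, judge_hit, judge_false):
    """Map five unknowns onto the four observable cells of the agreement table.

    p_guilty    : fraction of defendants who are actually guilty
    jury_hit    : P(jury convicts | actually guilty)
    jury_false  : P(jury convicts | actually innocent)
    judge_hit   : P(judge convicts | actually guilty)
    judge_false : P(judge convicts | actually innocent)
    """
    p_innocent = 1.0 - p_guilty
    both_convict = p_guilty * jury_hit * judge_hit + p_innocent * jury_false * judge_false
    both_acquit = (p_guilty * (1 - jury_hit) * (1 - judge_hit)
                   + p_innocent * (1 - jury_false) * (1 - judge_false))
    jury_convicts_alone = (p_guilty * jury_hit * (1 - judge_hit)
                           + p_innocent * jury_false * (1 - judge_false))
    judge_convicts_alone = (p_guilty * (1 - jury_hit) * judge_hit
                            + p_innocent * (1 - jury_false) * judge_false)
    return both_convict, both_acquit, jury_convicts_alone, judge_convicts_alone

# Hypothetical inputs, for illustration only. Many different combinations of the
# five unknowns are consistent with the same three observed numbers, and that is
# the gap Spencer fills with the strength-of-evidence ratings described next.
print(agreement_cells(p_guilty=0.73, jury_hit=0.87, jury_false=0.20,
                      judge_hit=0.97, judge_false=0.39))
```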
In addition to providing judge-jury results for 290 trials, the judges and jurors were asked to rate the evidence from 1 (evidence strongly favored the prosecution) to 7 (evidence strongly favored the defense). Including that strength-of-evidence information in his analysis, Spencer arrived at the following results, which I have simplified for ease of understanding.
The table shows that the jury convicts a factually innocent person in 5.4% of the trials, and that the judge (based on his or her recorded vote) would convict an actually innocent person in 10.5% of the trials. Those are not, however, quite the numbers we are looking for. We want to know the number of wrongful convictions per conviction, not per trial. To arrive at that number from the table, we divide the percentage of wrongful convictions by the percentage of convictions. In the case of the jury, that’s .054 / .689 = .078 = 7.8%. The corresponding number for judges is 12.9%.

Another shocking number from the table is the probability of an actually innocent person being convicted. Spencer’s analysis indicates that 27% of the defendants are actually innocent of the crime with which they are charged. If those innocents face a jury, they have a 20% chance of being convicted. ( .054 / .27 = .20 ) That’s bad enough. If those innocents instead elect a bench trial, they have a 39% chance of being convicted. ( .105 / .27 = .39 )

Similarly, you can calculate the rate of “wrongful acquittal” from the table. I put the term in quotation marks because it is not necessarily an error to acquit a person who is actually guilty. If the State did not prove its case beyond a reasonable doubt, then the error would be in voting guilty. When I use the term “wrongful acquittal” with the quotes, I am indicating only that the person was acquitted despite being factually guilty, not that the jury necessarily made an error.

The “wrongful acquittal” rate for the jury, based on Spencer’s analysis of the NCSC judge-jury agreement data, is 30.5%. ( .095 / .311 = .305 ) Whereas the judge is almost twice as likely to convict an innocent person, the judge is only one-third as likely to acquit a guilty person. ( .022 / .187 = .118 = 11.8% )
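Here is a small Python sketch that reproduces each of the rates quoted above from the per-trial figures in the preceding paragraphs. The dictionary labels are mine; the judge's 81.3% conviction rate is simply 100% minus the 18.7% acquittal rate used in the calculation above.

```python
# Converting Spencer's per-trial figures (as quoted above) into per-conviction,
# per-acquittal, and per-innocent-defendant rates.
jury = {"convicts_innocent": 0.054, "convicts_total": 0.689,
        "acquits_guilty": 0.095, "acquits_total": 0.311}
judge = {"convicts_innocent": 0.105, "convicts_total": 0.813,  # 0.813 = 1 - 0.187
         "acquits_guilty": 0.022, "acquits_total": 0.187}
innocent_fraction = 0.27  # Spencer's estimate of actually innocent defendants

for name, d in (("jury", jury), ("judge", judge)):
    wrongful_conviction_rate = d["convicts_innocent"] / d["convicts_total"]
    wrongful_acquittal_rate = d["acquits_guilty"] / d["acquits_total"]
    innocent_conviction_risk = d["convicts_innocent"] / innocent_fraction
    print(f"{name}: {wrongful_conviction_rate:.1%} of convictions are wrongful, "
          f"{wrongful_acquittal_rate:.1%} of acquittals free a factually guilty person, "
          f"and an innocent defendant faces a {innocent_conviction_risk:.0%} chance of conviction")
```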


There are so many numbers floating around, and so many numbers that could be made to float, that we need a way to simplify everything. That’s why each chapter in this monograph is defined by a single number, the rate of wrongful conviction for jury and bench trials combined. That rate can then be multiplied by the number of people incarcerated to determine the number of people wrongfully incarcerated.

To arrive at that single number, we need to account for the number of jury trials compared to the number of bench trials. I have summarized those calculations in the three tables below. One table is for the juries, one for the judges, and one for jury and bench trials combined. Each table contains three estimates. Spencer actually provided estimates for a variety of calculation assumptions, but recommended that only two be considered valid. Those are labeled 3a and 3b. I’ve also included the results from my own judge-jury agreement analysis, which I present in Chapter 11.1.

There’s a whole lot of data there, so you’ll have to click on the image to enlarge it and view it. Don’t be intimidated. I’ve marked up the figure to allow you to quickly home in on what’s important.

The numbers in bold are the basic results from the judge-jury analyses.

The numbers underlined (near the upper left) are state court conviction data for 2004 from the Sourcebook of Criminal Justice Statistics Online. They are, of course, identical for each of the three analyses within each table. I’m merely seeing what would happen if I applied the results from the judge-jury agreement analyses to real-world data.

The other numbers are merely Excel level calculations. The wrongful conviction and wrongful acquittal rates are inside the heavily outlined boxes in the bottom table. Spencer has estimated two different wrongful conviction rates. I took the average of his two results to use as the chapter number.
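The weighting step itself is simple, and the sketch below shows its shape. The per-conviction error rates are the jury and judge figures derived earlier in this chapter; the jury/bench conviction counts are placeholders rather than the 2004 Sourcebook numbers used in the tables, so the combined figure it prints is illustrative only.

```python
# Combining the jury and bench rates into a single wrongful conviction rate,
# weighted by how many convictions come from each kind of trial.
jury_rate = 0.078    # wrongful convictions per jury-trial conviction (derived above)
bench_rate = 0.129   # wrongful convictions per bench-trial conviction (derived above)

jury_convictions = 70_000   # PLACEHOLDER counts; the real weights come from the
bench_convictions = 30_000  # 2004 Sourcebook data shown in the tables

combined_rate = ((jury_rate * jury_convictions + bench_rate * bench_convictions)
                 / (jury_convictions + bench_convictions))
print(f"Combined wrongful conviction rate (illustrative only): {combined_rate:.1%}")
```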

I’m ecstatic to see that my calculated wrongful conviction rate matches that of Professor Bruce D. Spencer to the first decimal place, assuming I choose to accept his second analysis as correct. The match is interesting since we used two different sets of input data, and two dramatically different approaches for defining and solving our equations. I’m quite frankly stunned.

Spencer and I don’t agree nearly as well when it comes to the rate of wrongful acquittal. My calculated rate is nearly 50% higher than his. However, since acquittals are far fewer than convictions, in absolute numbers Professor Spencer and I are once again in near agreement. (Notice how I’ve elevated him from Spencer to Professor Spencer now that I see he agrees with me.)

The number “n” along the right-hand side of the tables is the ratio of guilty men set free to innocent men convicted. In all cases, it’s close to unity. In no case is it close to ten. Ten is the number made famous by the long-dead English jurist William Blackstone, who proclaimed that it is “better that ten guilty persons escape than that one innocent suffer.”
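Using just the per-trial figures quoted earlier, and ignoring the jury/bench weighting applied in the tables, a rough version of “n” can be computed directly; both values land far closer to one than to Blackstone's ten.

```python
# A rough, per-trial version of "n": guilty persons acquitted per innocent
# person convicted, from the simplified Spencer figures quoted earlier.
# The tables above weight jury and bench trials against real conviction
# counts; this sketch does not, so treat it only as a ballpark check.
jury_n = 0.095 / 0.054    # guilty acquitted vs. innocent convicted, jury verdicts
judge_n = 0.022 / 0.105   # the same ratio for the judges' hypothetical verdicts
print(f"jury n  ~ {jury_n:.1f}")   # about 1.8
print(f"judge n ~ {judge_n:.1f}")  # about 0.2
```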

My choice of using “n” as the symbol for that value comes from a clever and fascinating article by Alexander Volokh entitled “n Guilty Men.” Volokh therein presents an amazing and comprehensive history of various pronouncements of what a proper value of “n” should be, ranging from a high of infinity to a low of 0.1.

Any suggestion, however, that we can better protect the innocent among us (or ourselves for that matter) only by allowing more guilty people to escape is based on a false premise. There is nothing in the mathematics that says it must be so.

We could, if we wished, improve our law enforcement system to identify and convict a higher percentage of those who are in fact guilty and not even bring to trial those who are in fact innocent. A state-induced wrongful eyewitness identification, for example, can allow both the escape of a guilty person and the conviction of an innocent. Application of improved arson science could spare thousands of innocents and let not a single guilty person go free, since no crime may have been committed.

We need to learn the lesson so frequently reinforced upon those who attempt to excel at business: poor quality is extremely costly and can be deadly.

[Note to self. The closing paragraph really sucks. Need to fix it. Also, need to discuss actual innocence versus legal innocence.]
