A Minor Correction to Richard Posner

I quoted the following passage from Richard Posner in my recent article on Scotland’s three-verdict system:

When . . . judges and juries are asked to translate the requisite confidence into percentage terms or betting odds, they sometimes come up with ridiculously low figures-in one survey, as low as 76 percent, see United States v. Fatico, 458 F. Supp. 388, 410 (E.D.N.Y. 1978); in another, as low as 50 percent, see McCauliff, Burdens of Proof: Degrees of Belief, Quanta of Evidence, or Constitutional Guarantees?, 35 Vand. L. Rev. 1293, 1325 (1982) (tab. 2). The higher of these two figures implies that, in the absence of screening by the prosecutor’s office, of every 100 defendants who were convicted 24 (on average) might well be innocent.

See if you can spot the error in this reasoning.

I’m not entirely sure what Posner means by “screening by the prosecutor’s office,” but it is not true that a jury system that convicts every defendant whose probability of guilt is at least 76 percent will convict an average of 24 innocent defendants for every hundred convictions. The marginal convict will have a 24 percent chance of innocence, but the average convict will generally have a lower chance of innocence than that.

For instance, suppose that there are only two convicts, one who was caught red-handed (100 percent chance of guilt) and another who was 76 percent likely to be guilty. Then the average chance of a convict being guilty is (76+100)/2 = 88 percent.

It’s only in the case where every convict has exactly a 76 percent chance of being guilty that there will be an average of 24 innocent convicts out of every 100. That is the worst case among all possible distributions of guilt probabilities. In all likelihood, there will be fewer than 24 innocent convicts out of every 100, even if jurors convict everyone with a 76-percent-or-higher chance of guilt.
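
To see the arithmetic concretely, here is a quick simulation; the uniform distribution of guilt probabilities is purely illustrative and not something Posner or I have estimated:

```python
import numpy as np

rng = np.random.default_rng(0)

# Purely illustrative assumption: guilt probabilities among defendants
# who reach a verdict are spread uniformly between 0.5 and 1.0.
guilt_prob = rng.uniform(0.5, 1.0, size=1_000_000)

# Jurors convict everyone whose probability of guilt is at least 0.76.
convicted = guilt_prob[guilt_prob >= 0.76]

# The marginal convict (right at the threshold) has a 24 percent chance
# of being innocent, but the average convict does not.
avg_innocence = (1 - convicted).mean()
print(f"Average chance a convict is innocent: {avg_innocence:.1%}")
# Under this illustrative distribution, about 12 percent, not 24 percent.
```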

Significance Tests as Leading Questions

Under the common law, lawyers are generally not allowed to ask their own witnesses “leading questions,” since witnesses can be influenced by the way questions are asked. A leading question is one that suggests a particular answer: for instance, “Were you at the country club on Saturday night?” is a leading question, while “Where were you on Saturday night?” is not.

Econometricians should be as careful as lawyers when questioning the most unreliable of all witnesses: economic data. Most statistical software will automatically spit out t-tests for whether the coefficients in regression models equal zero. This is equivalent to asking the data, “Data, given these modelling assumptions, can you deny with 95% certainty that this coefficient equals zero?” That’s a leading question, and the econometrician shouldn’t ask it unless he has special reason to suspect that the coefficient is zero.
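
As a concrete illustration, here is a minimal sketch with simulated data (the statsmodels library, the variable names, and the numbers are my own assumptions, not anything from the argument above); the default fit answers exactly the leading question:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Made-up data: a modest but nonzero true effect of 0.1.
x = rng.normal(size=200)
y = 0.1 * x + rng.normal(size=200)

results = sm.OLS(y, sm.add_constant(x)).fit()

# The default t-test answers the leading question: "can you deny,
# at conventional significance levels, that this coefficient is zero?"
print(results.pvalues)  # p-values against H0: coefficient = 0
```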

For example, suppose an economist was attempting to test the effect on employment of an increase in the minimum wage (I choose this example only because I am familiar with it). If he observes many people working below the new minimum immediately before it goes into effect, he can believe with high certainty that the new minimum will be binding. Furthermore, if he observes many businesses employing low-skilled workers, as well as a stream of new businesses entering the market for low-skilled labour, he can believe with high certainty that the market for low-skilled labour is competitive rather than monopsonistic. Putting on his economist hat, he can infer from these two observations that the reduction in employment caused by the minimum wage will correspond to the elasticity of the demand curve for low-skilled labour.

Given this situation, would it be appropriate for this economist to ask the data, “Data, given these modelling assumptions, can you deny with 95% certainty that the minimum wage has zero effect on employment?” I hope the reader can see the problem with such a question. The economist has no special reason to believe that the demand curve for low-skilled labour is perfectly inelastic, any more than he has a special reason to believe that this demand curve has an elasticity of exactly 0.73. The question he should be asking is, in the case of this particular historical event, how much did the increase in the minimum wage increase unemployment? “Not at all” is a valid answer, but with no special reason to believe it is the correct answer, he should not bias his conclusion by phrasing the question in such a way that he leads his “witness” to favour zero.
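
By contrast, reporting the point estimate with a confidence interval answers the “how much” question without privileging zero. A sketch, reusing the same illustrative simulated data as above:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Same made-up data as above.
x = rng.normal(size=200)
y = 0.1 * x + rng.normal(size=200)

results = sm.OLS(y, sm.add_constant(x)).fit()

# Report the point estimate and a 95% confidence interval instead of
# a test against zero; zero gets no special treatment here.
slope = results.params[1]
low, high = results.conf_int()[1]
print(f"Estimated effect: {slope:.3f} (95% CI: {low:.3f} to {high:.3f})")
```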
