Counting the Deer in Princeton
Remarks on Constructive Empiricism
and on Nora Boyd, “Evidence Enriched,” Philosophy
of Science, 85, 2018
Once upon a time, philosophers thought that scientific
theories are collections of statements about the world. The statements have logical connections that
can be studied mathematically by the idealization of formal languages, and
semantic relations that can be studied mathematically by
the idealization of model theory, supplemented by various accounts of how terms
in the language or mathematical objects in the models relate to things one can
see, hear or touch. Then along came
constructive empiricism, which kept the idealized models but did away entirely
with the formalized language and the logical relations it characterized, and
said little about how mathematical objects in the models relate to things one
can see, hear or touch.
Rather belatedly, two difficulties with constructive
empiricism were noticed. The first was, indeed, how the models relate to things
we can see, hear or touch, a matter that is, after all, at the heart of
empiricism. The answer given is so odd that one might have thought the author
was just kidding. The idea is that the theorist has a mathematical data model,
and either that model can be embedded in a model of the theory or it cannot be.
Van Fraassen considers a theory T of the growth of the deer population in
Princeton, and the theorist’s data model, a graph of the variation of the deer
population over time. He writes: “Since this is my representation of the deer population
growth, there is for me no difference
between the question whether T fits the graph and the question whether T fits
the deer population growth” (256). The question of whether the
mathematical model describes the actual deer population (not for me, but in fact) does not arise; it
is not even sensible.
Suppose we ask a scientist how the
curve of deer population growth in Princeton was obtained, and we are told “For
each of several years, I counted the number of hoof marks in Princeton and
divided by 4." We advise the scientist that his curve may be a severe
overcount, since the same deer makes many more than 4 hoof marks. The scientist replies that there is no point
to such challenges. If the critics have a
different theory, let them construct their own data model. Constructive empiricism,
after all.
Suppose
a group of physicists launch a mass spectrometer aboard a satellite to record
ion concentrations above the atmosphere. They fail to calibrate the instrument
before launch, with the result that it returns values in wild disagreement with
previous measurements. (This really happened with the Swedish Freya satellite.)
Would the scientists use the data anyway to try to publish a new estimate of
ion concentrations? Would referees and a journal editor not care? Of course they would care, and what the scientists
actually published was a procedure for calibrating the spectrometer in-flight.
No one who takes science seriously can take seriously this
constructive empiricist account of how data and theory meet. Nora Boyd does. Her essay focuses on
facts familiar to anyone who has read almost any scientific paper. Scientific data are typically accompanied by ancillary information that
records the provenance of the measurements: what instruments were used, how
they were calibrated and shielded, what resolutions of space or time or other
variables were obtained, how the data were censored, clustered, or
transformed, what statistical procedures were used, how the units were selected
for measurement or treatment, where and when the
measurements were made, whether the study was blinded or double-blinded, etc.
This sort of information is typically given in the body of scientific reports
or in supplementary material or in documents attached to databanks.
Framing her story as an extension of
van Fraassen's, she claims the value of such ancillary information is twofold: it
allows multiple data sources to be used for related problems, investigations, or
arguments, and it "breaks underdetermination." I agree it does
the first, but not in a way that is accommodated by constructive empiricism. I doubt
it does the second in any sense except that of allowing further tests of a
theory or theories; if some other theory can account for all of the same
possible evidence—Quine’s sense of underdetermination—combining data sets won’t
distinguish them. But the main thing
such information does is something she ignores, something to which van Fraassen
seems to think there is no point:
it gives assurances
that the measurements have not been made by a process that disqualifies them as
premises in the assessment of a theory or theories because the measurements are
not faithful to the quantities claimed to be measured; and it provides
information to investigate whether such assurances are unwarranted. On constructive empiricist grounds, there is
no point to such assurances and no point to arguments that quantities have been
mismeasured, or to arguments that data treatments destroyed information, or to
objections that, in view of the provenance of the data, the wrong statistical
procedures were used, or that the experimental design leaves open alternative
explanations of the data whose possibility better designs would have eliminated,
etc. Boyd misses all of that, perhaps because once science is cast in a
constructive empiricist framework, faithfulness to the phenomena, truth, is not
the point.
Boyd’s suggestion that ancillary information helps in the
proper use of multiple data sets for a question, or of the same data set for
multiple problems, is of course correct, but it is unintelligible in the
constructive empiricist framework.
And that is the second belatedly noticed problem with constructive
empiricism. On the old-fashioned view, language provides linkages between
models. Language supplies the connection that identifies a relation in one model
with the same relation in another model. As Hans Halvorson points out, there is no such connection
in constructive empiricism, only so many disconnected models, so many monads. A
theory that constrains quantities conditionally, Newtonian dynamics for
example, has many models under different conditions. One would like to say that
the force holding the planets in their orbits is the same as the force acting
on pendula, and indeed Newton says just that. On the constructive empiricist
reconstruction, these are just different models of the theory, and nothing
identifies the property acceleration
in one model with the property acceleration in another. On the old-fashioned philosophy of science, that
is one of the services of language. Boyd tells me (private communication) that she does not endorse this part of "constructive empiricism," and she does refer to "minimal empiricism."
Minimal empiricism turns out to be bad wine in new bottles. Citing van Fraassen, she says data are acquired to a theoretical purpose, to support, or not, a particular theory, and data are empirical only with respect to such a purpose. Being empirical for a purpose is just what has long been called being relevant to a theory or hypothesis. So what determines that relevance? No answer. If I collect data on the spread of California poppies, is that relevant to a hypothesis about the acceleration of the universe? Is it if I say that is its purpose? Of course, there is no theory of relevance in "constructive empiricism" either. If a theory combines dynamics for the universe with dynamics for the spread of poppies, and someone's "data model" for poppies fits into it, is that evidence for the dynamics I postulate for the universe?
Boyd is a new Ph.D. from Pitt HPS, and it is not fair to take her to task. Who then? Pitt HPS. They take smart young people and turn them out, well, without a sense of what it is personally to discover something worth discovering, or even to develop an actually new idea. As Pitt HPS goes, so goes philosophy of science in America, pretty much.