This issue of Philosophy
of Science contains some good, some bad, some odd. It gives evidence that
methodology in philosophy of science is pretty much in the doldrums or worse,
while good work is being done producing economic models for various ends.
Brian Skyrms, Grades
of Inductive Skepticism
Reject.
This is a very brief rehash of some history of probability,
coupled with some remarks on ergodic probabilities, remarks that go nowhere.
The piece seems oddly trivial and
unworthy of its distinguished author.
One has to wonder why it was published—or submitted. Hypothesis: The author is eminent and a
colleague of the editors. That sort of thing has happened before in Philosophy of Science, although not that
I can think of under the current editors.
But one of the things colleagues should do for one another is discourage
the publication of stuff that is trivial or bad in other ways.
Ben Jantzen,
Accept.
Likelihood has an apparent problem. Suppose you are weighing
hypotheses h1 and h2. You know b. You learn e. Should you compare h1 and h2 by
p(e | h1, b) / p(e | h2, b)
or by p(e, b | h1) / p(e, b | h2)?
Which hypothesis is preferred may not always be the same on
the two comparisons. Jantzen makes the sensible suggestion that which to use
depends on whether you are asking about the extra support e gives to h1 versus
h2 in a context in which b is known, or whether you are asking about the total
support. Jantzen’s point is not subtle,
but the paper is well done and the examples (especially about fishing with nets
with holes too large) are illuminating.
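To make the divergence concrete (my own toy numbers, not Jantzen’s): since p(e, b | h) = p(e | h, b) p(b | h), the two comparisons can rank the hypotheses differently whenever h1 and h2 give b sufficiently different probabilities.

```python
# Toy illustration (not Jantzen's example): the two comparisons can disagree
# when h1 and h2 assign different probabilities to the background b.

def compare(p_e_h1b, p_e_h2b, p_b_h1, p_b_h2):
    """Return the 'extra support' ratio and the 'total support' ratio."""
    partial = p_e_h1b / p_e_h2b                       # p(e | h1, b) / p(e | h2, b)
    total = (p_e_h1b * p_b_h1) / (p_e_h2b * p_b_h2)   # p(e, b | h1) / p(e, b | h2)
    return partial, total

# h1 predicts e better given b, but makes b itself unlikely.
partial, total = compare(p_e_h1b=0.9, p_e_h2b=0.5, p_b_h1=0.1, p_b_h2=0.9)
print(partial)  # 1.8 -> favors h1 on the first comparison
print(total)    # 0.2 -> favors h2 on the second
```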
Which reminds me of a deeper problem with likelihood ideas, one that seems not to be much explored: likelihood
doctrine seems to imply instrumentalism.
Likelihood arguments are used not just to compare hypotheses
but to endorse hypotheses, e.g., via maximum likelihood inference. Consider two principles:
1.
Hypotheses addressing a body of data should be
preferred according to the likelihood they give to that data.
2.
A hypothesis should not be endorsed if it is
known that there are other hypotheses that are preferred or indifferent to it
by criterion 1 above, especially not if there is a method to find such
alternatives.
If the data is finite, the hypothesis just stating the
evidence has maximum likelihood. So some
additional principle is required if likelihood methodology is to yield anything
more than data reports. The hypothesis space
must somehow be restricted.
Try this:
3.
Only hypotheses that make predictions beyond the
data are to be
considered.
So suppose there are data e1…en and consider some new experiment or observation e not in the data but for which “serious” hypotheses explaining e1…en give some probability to the outcomes. Let the outcomes be binary for simplicity, so that a hypothesis h gives e the probability P(e | h). Consider the hypothesis:
e1&…&en & argmax_h P(e | h), if max_h P(e | h) > max_h P(~e | h), and
e1&…&en & argmax_h P(~e | h) otherwise
(where argmax_h P(e | h) is the serious hypothesis that gives e the highest probability).
This hypothesis meets condition 3 and gives e (or ~e) a
likelihood at least as great as any alternative hypothesis.
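For concreteness, here is a sketch of the trick with made-up hypothesis names and probabilities (nothing below is from any paper):

```python
# Sketch of the gerrymandered hypothesis (hypothesis names and probabilities
# are made up for illustration).
serious = {"h1": 0.7, "h2": 0.4, "h3": 0.55}                 # P(e | h) for each "serious" h

best_for_e = max(serious, key=serious.get)                   # argmax_h P(e | h)
best_for_not_e = max(serious, key=lambda h: 1 - serious[h])  # argmax_h P(~e | h)

# Conjoin the stated data with whichever best predictor wins the comparison
# of max_h P(e | h) against max_h P(~e | h).
if serious[best_for_e] > 1 - serious[best_for_not_e]:
    trick = "e1 & ... & en & " + best_for_e
else:
    trick = "e1 & ... & en & " + best_for_not_e
print(trick)  # entails the data, predicts beyond it (condition 3), and matches
              # the best serious hypothesis's likelihood for the predicted outcome
```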
Ok, try this:
4. Only hypotheses that make an
infinity of predictions are to be considered.
But the stupid pet trick above can be done infinitely many
times. So try this:
5. The hypotheses must be finitely
axiomatizable.
Still won’t do, as (I think) an easy adaptation of the proof in http://www.jstor.org/stable/41427286 shows.
Lina Jansson
Reject
Both the thesis and the argument of this paper are either
opaque or weird; it is difficult to see the warrant for publishing. Her stalking horses are “causal accounts of
explanation.” On Strevens’s account, causal asymmetry is why X explains Y rather than the other way round (Dan Hausman had that idea earlier); on Woodward’s account, that X causes Y but Y does not cause X implies that a manipulation of X changes Y, but not vice versa. So far as I know, neither of them claims that all explanations are causal explanations. But a lot of them
are.
Jansson’s argument seems to be as follows:
Leibniz held that Newton’s gravitational theory was not a
causal explanation, because causal explanations require mechanisms and no
mechanism was given for gravitational attraction. She reads Newton as “causally
agnostic” about his laws, which seems to me a very long reach. He was agnostic (publicly) about the mechanisms that produce the laws, but not about whether the laws imply causal regularities: drop a ball and that will, ceteris paribus, cause it to take up a sequence of positions at times in accordance with the law of gravity. But suppose, for the sake of argument, that she is right; what then is the argument?
She writes: “Put simply, the problem of
understanding this debate from a causal explanatory perspective stems from the
reluctance, on both sides, to take there to be a straightforward causal
explanation given by the theory.” And, a sine qua non of a correct account of explanation is that it be able to “understand the debate.”
There is this oddity about universal
gravitation and causation. If I drop a ball it causes the ball to fall, the
ball’s falling influences the motion of Mars (instantaneously on Newton’s
theory), and the change in the motion of Mars influences the course of the
ball, also instantaneously. Immediate feedback loop. But Mars’s influence doesn’t
determine the position of the ball after I drop it, and the position of the
ball after I drop it doesn’t cause my dropping it.
Anyway, her point is different. Here is
the form of the argument.
Accounts S and W say Newtonian
gravitational theory is causal.
Neither the creator of the theory nor its most prominent
critic unequivocally said it was causal.
Therefore accounts S and W are false (or inadequate, or
something).
Parallels.
A: Chemical changes involve the combination or releases of
substances made up of elements.
Lavoisier said combustion involves combination with oxygen.
Priestley said combustion involves the release of phlogiston.
Therefore A is false.
The theory of probability specifies measures satisfying
Kolmogoroff’s axioms.
Bayesians say probability is opinion.
Frequentists say probability is frequency.
Therefore the theory of probability is false.
Jansson’s “methodology” assumes that concepts of causation
and explanation never change, and that historical figures are always
articulate, and never make errors of judgement in the application of a concept,
and that if some historical figure would only apply a concept under restrictive
circumstances (e.g., no action at a distance), an account of the concept must
agree with that judgement or posit a new concept. Individuation of concepts is a vague and arbitrary matter—is there one concept of causality, or Leibniz’s concept of causality, Newton’s concept of causality, and so on? On her view, so far as I can see, for every sentence about causal relations, general or specific, about which some scientists have at some time disagreed, two new concepts will be needed.
Not much to be learned from that.
Robert Batterman and Colin Rice
Revise and
resubmit
Another essay on explanation (will philosophers of science ever let up on this?) whose exact point is difficult to identify.
"We
have argued that there is a class of explanatory models that are explanatory
for reasons that have largely been ignored in the literature. These reasons
involve telling a story that is focused on demonstrating why details do not
matter. Unlike mechanist, causal, or difference-making accounts, this story
does not require minimally accurate mirroring of model and target system.
We
call these explanations minimal model explanations and have given a
detailed account of two examples from physics and biology. Indeed, minimal
model explanations are likely common in many scientific disciplines, given that
we are often interested in explaining macroscale patterns that range over
extremely diverse systems. In such instances, a minimal model explanation will
often provide the deeper understanding we are after. Furthermore, the account
provided here shows us why scientists are able to use models that are only
caricatures to explain the behavior of real systems."
The idea seems to be that there are theories that find features, and relations among them, that entail phenomenological regularities, no matter the rest of the features of a system, and no matter whether the features in question are exactly exemplified in a system. There are two examples, one from fluid dynamics, the other Fisher’s opaque explanation of the 1:1 sex ratio in many species, based on the equal effort required to raise male or female offspring but the differential average reproductive return to raising males if females are in excess, or to raising females if males are in excess. I don’t understand the fluid dynamics model, and Fisher’s requires a lot of extra assumptions and ceteris paribus clauses to go through (grant the equal cost of rearing male and female offspring, but imagine that one male can fertilize many females and there is a predator that prefers males exclusively), but never mind.
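For what it is worth, the core of Fisher’s argument is just an expected-return calculation. Here is a minimal sketch with my own numbers, keeping in place exactly the ceteris paribus assumptions just complained about:

```python
# Minimal sketch of Fisher's sex-ratio argument (illustrative numbers; the
# ceteris paribus assumptions complained about above are simply assumed).
# If one sex is in excess, each member of that sex gets a smaller expected
# share of matings, so producing the rarer sex yields a higher average return.

def expected_return_per_offspring(sex, n_males, n_females, total_grandoffspring):
    """Expected grand-offspring credited to one offspring of the given sex.

    Every grand-offspring has one mother and one father, so the totals credited
    through males and through females are each equal to total_grandoffspring.
    """
    if sex == "male":
        return total_grandoffspring / n_males
    return total_grandoffspring / n_females

# A population with a male-biased sex ratio:
n_males, n_females, grandkids = 600, 400, 1000
print(expected_return_per_offspring("male", n_males, n_females, grandkids))    # ~1.67
print(expected_return_per_offspring("female", n_males, n_females, grandkids))  # 2.5

# With equal rearing costs, producing the rarer sex pays more, pushing the
# population back toward a 1:1 ratio, where the two returns are equal.
```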
What I don’t understand about this paper is why most
theories in the physical sciences don’t satisfy B and C’s criteria for a
minimal model. Thermodynamics? The details of the molecular constitution of a
system are largely ignored. Relativity? It doesn’t matter whether the system is
made of wood or iron, the Lorentz transformations still hold; it doesn’t matter
how the light is generated, its velocity is still the same. Newtonian celestial
mechanics? Doesn’t matter that Jupiter is made of gas, Mercury of rock, and
Pluto of ice, still the same planetary motions. Even theories that probe into
the internal structure of a system are minimal with respect to some other
theories. Dalton appealed only to masses of elemental particles—that, plus a few assumptions, yields the law of definite proportions. Berzelius added electrical
forces between atoms, which were gratuitous for deriving definite proportions.
What is not clear in this paper is how B & C intend to distinguish minimal models from almost every theory that shows that a set of features of a kind of system (individual or aggregate, or approximations to such features), together with related laws, suffices for phenomenological relations. That is what physical theories generally do. Their fluid flow example almost suggests that all that is required is an algorithm that generates the phenomena from (perhaps) measurable features of a system.
So, considering that example, the authors might have asked: when is an
algorithm for generating the phenomena an explanation of the phenomena? They
did not.
Dean Peters
Revise and resubmit
Peters’ essay is useful in two
respects. First, it treats the question in the title as turning on this: what
parts of the data confirm what parts of a theory? That adds a little structure to the
philosophical discussions of realism. And, second, it provides a succinct
critical review of bad proposals to answer the question. Peters has his own
answer, which is not obviously useful. Here it is:
“So, to pick out the essential elements
of the theory under the ESSA, start with a subtheory consisting of statements
of its most basic confirmed empirical consequences or perhaps its confirmed
phenomenological laws. These, after all, are the parts of a theory that even
empiricists agree we should be “realists” about. Further propositions are added
to this subtheory by a recursive procedure. Consider any theoretical posit not
in the subtheory. If it entails more propositions in the subtheory than are
required to construct it, tag it as confirmed under the unification criterion,
and so add it to the subtheory. Otherwise, leave it out. When there are no more
theoretical posits to consider in this way, the subtheory contains the
essential elements of the original theory.”
The proposal as
developed is insubstantial: “Consider any theoretical posit not in
the subtheory. If it entails more propositions in the subtheory than are
required to construct it” – what does “required to construct it” mean?
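To see where the weight falls, here is a bare-bones reconstruction of the quoted recursion in code (my paraphrase, not Peters’s formulation); everything turns on the test left as a stub:

```python
# Rough reconstruction of the quoted procedure (my paraphrase, not Peters's own
# formalism). The unexplained notion is isolated in propositions_required_to_construct.

def essential_elements(confirmed_consequences, theoretical_posits, entailed_in):
    """Grow the subtheory from confirmed empirical consequences, as in the quote."""
    subtheory = set(confirmed_consequences)
    changed = True
    while changed:
        changed = False
        for posit in theoretical_posits - subtheory:
            entailed = entailed_in(posit, subtheory)        # subtheory propositions the posit entails
            needed = propositions_required_to_construct(posit, subtheory)
            if len(entailed) > len(needed):                 # the "unification criterion"
                subtheory.add(posit)
                changed = True
    return subtheory

def propositions_required_to_construct(posit, subtheory):
    # This is the part the paper leaves unexplained.
    raise NotImplementedError("What does 'required to construct it' mean?")
```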
In criticizing other proposals, Peters
appeals to logical consequences, and proceeds with a distinguished set of
“posits”—i.e., axioms. Hold him to the
same standard. Theories can be axiomatized in an infinity of ways. We need an
account of the invariance of the result of the procedure—whatever it is—over
different axiomatizations, or an account of “natural axiomatizations” and
warrant for using them exclusively. The work of Ken Gemes and Gerhard Schurz is
relevant here. So it seems to me that
Peters has an idea—conceivably ultimately a good idea—that he did not do the
work to make good on.
Roger DeLanghe
Accept
This is a very nice essay providing a simple economic model
in which there are balancing incentives for scientists to adopt and contribute
to an existing theory or to propose a new one.
Lots that might be done to expand the picture for more realism, and it
would be nice if those pursuing Kitcher’s original idea assembled some relevant
data.
Marius Stan
Unity for Kant’s Natural Philosophy
I have no opinion about this essay, which is on how Kant
might have sought, although he did not, synthetic a priori grounds for Euler’s
torque law. Nor do I see why anyone should care. Clearly, some do.
Carlos Santana
Accept
This well-argued and lucid essay shows that there is a model
in which agents with ambiguous signaling (under replicator dynamics) invade a
population of unambiguous signalers, but not vice-versa. Despite the
considerable empirical evidence the author (a graduate student at Penn) gives
for the insufficiency of other explanations of the frequency of ambiguity in
human and animal communication, I am worried by the following thought. The
evolution of language, or at least signaling, we expect to have gone from the
very ambiguous to the more precise. That is what syntactic structure and an
expanded lexicon afford. So if signaling by ambiguous strategies cannot be
invaded by signaling by “standard” (i.e., perfectly precise) strategies, how
did more precise, if still ambiguous in some respects, signaling systems
evolve? It strikes me that the author
may have proved the wrong result.
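For readers who want the flavor of the machinery: invasion under replicator dynamics is checked by seeding a population playing one strategy with a small share of the other and seeing whether that share grows. The sketch below uses a generic two-strategy setup with made-up payoffs, not Santana’s signaling model.

```python
# Generic replicator-dynamics invasion check with a hypothetical 2x2 payoff
# matrix; this is NOT Santana's model, just the standard machinery.
import numpy as np

A = np.array([[1.00, 0.90],   # payoff to "precise" against (precise, ambiguous)
              [1.05, 1.10]])  # payoff to "ambiguous" against (precise, ambiguous) -- made-up numbers

def replicator_step(x, A, dt=0.01):
    """One Euler step of the replicator dynamics: x_i += x_i * (f_i - mean fitness) * dt."""
    f = A @ x                      # fitness of each strategy against the current population
    return x + x * (f - x @ f) * dt

def invades(invader, steps=20000, eps=1e-3):
    """Does a small share of strategy `invader` (0 or 1) grow in a population of the other?"""
    x = np.array([1 - eps, eps]) if invader == 1 else np.array([eps, 1 - eps])
    for _ in range(steps):
        x = replicator_step(x, A)
    return x[invader] > eps

print(invades(invader=1))  # ambiguous invading precise: True with these payoffs
print(invades(invader=0))  # precise invading ambiguous: False with these payoffs
```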