Paul Churchland, Plato’s Camera, MIT Press, 2012
As his title indicates, Paul Churchland is a man of big
metaphors. He is a man of big ambitions as well, not for himself but for his
theory. He thinks that neuroscience will provide—and is well on the way to providing—a complete logic and philosophy of science. Academic philosophers have missed the boat, or the bandwagon, whichever metaphor you prefer. Neuroscience provides “a competing conception of cognitive activity, an alternative to the ‘sentential’ or ‘propositional attitude’ model that has dominated philosophy for the past 2,500 years” (14). “[T]hese spaces [of synaptic weights and patterns of neural activation] specify a set of ‘nomically possible worlds’…these spaces hold the key to a novel account of both the semantics and the epistemology of modal statements, and of counterfactual and subjunctive conditionals” (18). “Notably,
and despite its primacy, that synapse-adjusting space-shaping process is almost
wholly ignored by the traditions of academic epistemology, even into these
early years of our third millennium.” (13)
A little potted history will put Churchland’s book in
context. The great philosophers joined theories of mind with theories of method
for acquiring true beliefs. For Leibniz
and Hobbes and even Hume, logic was the algebra by which the mind constructs
complex concepts, or ideas, from simpler ones. George Boole realized that whatever
the laws of thought may be, they are not in necessary agreement with the laws
of logic. People make errors, and some people make them systematically. Logic,
semantics, causality, probability have their relations, the mind has its
relations, and the twain shall sometimes, but not always, meet.
Sparked by Ramón y Cajal’s discovery of the axon-dendrite structure of neural connections, which suggested that the nerve cell is an information-processing unit and the synaptic connection a channel, avant-garde speculation in the last quarter of the 19th century turned to how the distribution of “excitation” and its transfer among cells might produce consciousness, thought, and emotion. Connectionist neuropsychology was born in the writings of Cajal, Sigmund Exner and, yes, Sigmund Freud. Exner, like Freud, was an assistant to the materialist physiologist Ernst von Brücke, and Freud’s neuropsychological speculations from 1895 elaborate (one might say exaggerate) lines suggested in Exner’s 1894 Entwurf
zu einer physiologischen Erklärung der psychischen Erscheinungen, both inspired in a general way by Hermann
von Helmholtz, with whom Freud once proposed to study. In Freud’s Entwurf einer Psychologie—still
in print in English translation as Project for a Scientific Psychology—the
neurons are activated by stimuli from the sense organs, or by chemical sources internal
to the body. Neurons pass activation to those they are connected with in the
face of some resistance, which is reduced by consecutive passage (an idea now
called, with historical injustice, the “Hebb synapse”) and eventually produce a
motor response. Depending on the internal and external stimuli that result from
motion, a feedback process occurs which eventuates in a semi-stable collection
of facilitations among nerve cells that constitute our general knowledge of the
world—what Freud called the “reality principle.” The particular neural
activations of memory and momentary experience occur within those learned
constraints captured by the facilitations. Logic—the subject–predicate logic Freud had learned from Franz Brentano—is at once created (as thought) and realized (as model) by the synaptic connections.
That is pretty much Churchland’s theory. There are modern
twists, of course—Cajal and Exner and Freud had no computers with which to do
simulations or make analogies, and they had a different data set—and Churchland
has all sorts of terminological elaborations. But, other than a review of
connectionist computing and some modern neurobiology, and of course a host of
new metaphors—“sculpting the space” of activation connections and so on—what
is new in Churchland’s book? What he says: “a novel account of both the semantics and the
epistemology of modal statements, and of counterfactual and subjunctive conditionals” as well as a novel
account of synonymy and an explanation of scientific discovery and
intertheoretical reduction and more. In
sum, Churchland shares the aim of the Great Philosophers to produce a unified
account of mind, meaning and method, but this time founded on the neuroscience of
neural processes rather than on Hume’s introspective science of impressions and
ideas or Kant’s a priori concepts.
Historians and philosophers of science have written reams
about how Darwin came to the view that species formed and evolved by spontaneous
variation and natural selection, what knowledge and arguments and hypotheses he
had available when he embarked on the voyage of the Beagle, what he was
convinced of by what he saw in those passages, what the collections and notes
with which he returned taught him, what influences his subsequent reading and
conversation and correspondence bore. Churchland’s explanation of Darwin’s discovery
can be Bowdlerized but not summarized:
“The causal origins of Darwin’s explanatory epiphany resided in the peculiar modulations of his normal perceptual and imaginative processes, induced by the novel contextual information brought to those processes via his descending or recurrent axonal pathways…A purely feedforward network, once its
synaptic weights have been fixed, is doomed to respond to the same sensory
inputs with unchanging and uniquely appropriate cognitive outputs…A trained
network with a recurrent architecture, by contrast, is entirely capable of
responding to one and the same sensory input in a variety of very different
ways…As those states meander, they provide an ever-changing cognitive context
into which the same sensory subject-matter, on different occasions, is
constrained to arrive. Mostly, those contextual variations make only a small
and local difference in the brain’s subsequent processing of that repeated sensory
input. But occasionally they can make a large and lasting difference. Once
Darwin had seen the now-famous diversity of finch-types specific to the environmentally diverse Galapagos Islands as being historically
and causally analogous to the diversity of dog-types specific to the
selectionally diverse dog-breeding kennels of Europe, he would never see or
think of the overall diversity of biological forms in quite the same way again.
And what gave Darwin’s conceptual reinterpretation here the lasting impact that
it had on him was precisely the extraordinary explanatory power that it
provided…The Platonic camera that was Darwin’s brain had redeployed one of its
existing ‘cognitive lenses’ so as to provide a systematically novel mode of
conceptualization where issues of biological history were concerned.”
(191-200).
A lot has gone wrong
here. How the output (the realization of explanatory power) “sculpts the space”
of neural connectivities anew is unexplained. The “recurrent neural network”
and “descending axonal pathways” stuff has nothing to do specifically with
Darwin. It could as well be said of the epiphanies of Newton or Einstein or the
fantasies of Erich von Däniken. When Churchland wants actually to engage Darwin,
he has to step out of the neurological generalities and into the actual
history, and he has to appeal to a notion, “extraordinary explanatory power,”
taken from old-fashioned philosophy of science. And that is because he knows
nothing specific about what neural processes took place in Darwin, and nothing
about what neural processes constitute the realization of explanatory power, or
what about the neural processes themselves distinguishes genius from crank from
paranoid. He is not to blame for that, but it shows the impotence of his
framework for elucidating much of anything about scientific discovery, let
alone for providing guidance to it.
It is the same everywhere with Churchland. He is not to be
faulted for want of theoretical ambition. Take the question of inter-theoretic
reduction. After whipping off criticisms—the quality of which I have not space to pursue—of various accounts, Churchland offers this:
“A more general framework, G, successfully reduces a
distinct target framework, T, if and only if the conceptual map G, or some part
of it, subsumes the conceptual map T, at least roughly… More specifically
(a) the high-dimensional configuration of prototype-positions and prototype-trajectories within the sculpted neuronal-activation space that constitutes T (a conceptual map of some abstract feature-domain) is
(b) roughly homomorphic with
(c) some substructure or lower-dimensional projection of the high-dimensional configuration of prototype-positions and prototype-trajectories within the sculpted neuronal-activation space that constitutes G (a conceptual map of some more extensive abstract feature-domain).” (210–211)
Good. Now does statistical mechanics reduce thermodynamics?
Does quantum theory reduce classical mechanics? Or what? Consult prototype
positions in sculpted neuronal activation space. I will skip the details of Churchland’s account of “homomorphisms between sub-structures of configurations of prototype-positions and prototype-trajectories.” Suffice it to say that it is an ill-defined attempt at a little mathematics, so odd as perhaps to have been whimsical.
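For contrast, the notion Churchland gestures at has a perfectly standard definition; what follows is textbook model theory, not Churchland's text. A map $h : A \to B$ is a homomorphism from a structure $\mathfrak{A} = (A, R_1, R_2, \ldots)$ to a structure $\mathfrak{B} = (B, S_1, S_2, \ldots)$ just in case, for each relation $R_i$ of arity $k$ and all $a_1, \ldots, a_k$ in $A$,
$$R_i(a_1, \ldots, a_k) \;\Rightarrow\; S_i(h(a_1), \ldots, h(a_k)).$$
To put the definition to work one must say what the domains are and which relations are to be preserved. Churchland does not say which relations on prototype-positions and prototype-trajectories matter, or what degree of failure “roughly” homomorphic tolerates, and so the criterion decides no cases.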
About meaning relations, the
general idea seems to be that one thinks counterfactually or hypothetically by
activating patterns that are neither sensory responses nor exact reproductions
of previous activation patterns—not memories—which, less the ‘activation patterns,’ is precisely Hume’s account. Nothing particular is established, and we are left to wonder what constraints on our meandering activations incline us to think that if necessarily (if p then q), then if necessarily p, then necessarily q (in symbols, □(p → q) → (□p → □q)). What distinguishes the hypothetical from the counterfactual, the entertained from the believed, the supposition from the plan, the wish from the fear from the doubt from the conviction—all this is unexplained, and it seems doubtful
that Churchland can do better than Hume on imagination.
When it comes down to it, Churchland
does not want to explain propositional attitudes; he wants to do away with
them. Some reasons are given in his
argument against one propositional attitude, the analysis of knowledge as true,
justified belief. He notes the usual Gettier problems but that is not what
bothers him. We, and infants and animals, have, he says, a-linguistic knowledge.
Beliefs are attitudes to propositions and truth is a property of sentences, so
to attribute them to much of what we know and other animals know is a category
mistake. And so, for much of what is known but is not, or cannot be, said,
justification is impossible and to ask for it is likewise a kind of category
error.
There is something to this, but only a little. There is
implicit knowledge, exhibited in capacities, which someone can have and yet
have no awareness of, no thought of. The
psychologist evoking the capacity can generally state what her subject
implicitly knows. She may even claim to know in a general way how the subject
came to know it, and so find it justified and true. Whether such implicit
knowledge is a belief of the knower is the hard question. Churchland would, I think, say not; Freud, who lived on the premise of unconscious beliefs, would
have had no trouble allowing it. We have thoughts we never formulate in
language—we can think we see a familiar face in a crowd and automatically look
again, testing the thought before it takes, even to ourselves, a linguistic
form. Evidence of a-linguistic thought is all around anyone who lives with dogs
or cats or even a closely watched cow. But I do not see why such thoughts cannot
be believed or had with surprise or fear by those entities that have them, why
they cannot be the objects of the very attitudes that philosophers call
propositional. There is generally a proposition that approximately expresses
them even if their possessor cannot formulate it. However this may be, it remains that our
thoughts are not on a par. There is a difference between formulating a plan, an
intention, and entertaining a possibility, and Churchland’s framework has no
place for it. Perhaps one could be made, but for that one would have to want to
allow something very much like propositional attitudes.
On technical points, the book is a mixture. Lots of things
are explained vividly and correctly, some not so much. For example, recurrent networks have a problem with long-term memory. A class of algorithms Churchland does not discuss, Long Short-Term Memory networks (S. Hochreiter and J. Schmidhuber, “Long Short-Term Memory,” Neural Computation 9(8): 1735–1780, 1997), does better, as the sketch below illustrates.
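The remedy is easy to state. In an LSTM cell the memory is carried by an additively updated cell state, with learned gates controlling what is written, kept, and read; the additive update is what lets information and gradients persist over long stretches. A minimal sketch of one cell update in Python with numpy, in the now-standard form; the forget gate is a slightly later addition to the 1997 design, and the stacked parameter names are mine.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    # W, U, b are 4-tuples of input weights, recurrent weights, and biases
    # for the input (i), forget (f), and output (o) gates and the candidate (g).
    (Wi, Wf, Wo, Wg), (Ui, Uf, Uo, Ug), (bi, bf, bo, bg) = W, U, b
    i = sigmoid(Wi @ x + Ui @ h + bi)  # how much new content to write
    f = sigmoid(Wf @ x + Uf @ h + bf)  # how much old content to keep
    o = sigmoid(Wo @ x + Uo @ h + bo)  # how much of the cell to expose
    g = np.tanh(Wg @ x + Ug @ h + bg)  # candidate content
    c = f * c + i * g                  # additive cell state: memory persists
    h = o * np.tanh(c)                 # hidden state for the next step
    return h, c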
He is a bit weak on biology. Churchland dismisses innateness hypotheses on the grounds that genes
would have to specify synaptic connections, and there are billions of those and
only 30,000 or so genes. He forgets (I know he forgets, because once I told
him) that a person’s liver cells and neurons have the same genes but very
different forms and functions--cellular form, function and location involve
gene expression, and it isn’t just one gene-one expression, one protein, one
synaptic connection. The combinatorics are enormous. He writes metaphorically
of “sculpting activation space” but fails to note that nerve connections are
physically pruned—literally destroyed—from infancy to maturity. Remarkably, the book entirely ignores the
growing neuropsychological research on predicting an agent’s environment from
indirect measurements of brain physiology—the very work that comes closest to
realizing Churchland’s vision.
The real problem with Churchland’s book is too long an arm,
a lengthy overreach. One can grant the general Cajal-Exner-Freud connectionist
framework. It provides a theoretical position from which to do research and
that research is prospering. A few professional philosophers have contributed,
Stephen Quartz for example with fMRI experiments, and Joseph Ramsey with
improvements in fMRI methodology. But decorating the framing assumptions of
scientific research in neuroscience with metaphors, accounts of computer
simulations, and vacuous applications neither helps with our problems in
philosophy of science nor contributes to methods for effectively carrying out
that research.
P. Kyle Stanford, Exceeding Our Grasp, Oxford University Press, 2006
Banality, Nelson Goodman once said, is the price of success
in philosophy. Here is a banality: One cannot think of everything, and if a
truth is something one cannot think of, then one will not believe that truth.
That is the fundamental substance of Stanford’s thesis,
elaborated with brief discussions of some of the philosophy of science
literature on theoretical equivalence, underdetermination, and confirmation,
and with a more extended discussion of examples in the history of science. More
elaborately, the thesis is that historical scientists did not, and could not,
think of the alternatives to their theories that later explained their evidence
in different ways; so, too, our contemporaries are unable to think of such
alternatives that may lurk in Plato’s heaven. Hence we should not believe our
current theories. The conclusion does not follow. Perhaps one ought to believe, of the hypotheses one
can conceive and analyze, those best supported by current evidence. The general
agnostic will never believe the truth; those who believe on their best evidence
and available conceptions at least have a shot. Even this much strategic reflection is not to be found in Stanford’s
essay.
Much of Stanford’s philosophical argument is negative: there
are no general characterizations of theoretical equivalence even assuming a
definite space of possible data; there are no general theories of what parts of
a theory are confirmed by what data. One
could apply his argument reflexively: there may be possible characterizations
of such relations that have not been thought of, in which case perhaps we
should be agnostic about being agnostic about our theories. I don’t know if
agnosticism is transitive. The rest of his argument consists of historical
discussions about what various scientists thought that turned out to be wrong,
for example what they thought were the indisputable parts of their theories. Here
the absence of any normative theory in the book collides with the historical
exegesis: why should we think that various historical figures, Maxwell, for
example, were right about what they thought were the indubitable, or best
confirmed, aspects of their theories?
More than that, Stanford’s histories neglect historical stability. Two
centuries later, the atomic weight of oxygen is still greater than the atomic
weight of hydrogen.
Logic is also neglected in Stanford’s effort to make novelty
out of banality. Stanford’s discussion of Craig’s theorem, for example, is odd.
He takes it as establishing that a theory has a perfectly observationally
equivalent instrumentalist ghost, and as of no further significance for
theoretical equivalence. But what the theorem establishes is that if there is a
recursively enumerable linguistic characterization of the possible data for a
theory, then there is an infinity of theories that entail the same possible
data. Under mild assumptions, there is an infinity of finitely axiomatizable,
logically inequivalent such theories, and there is no logically weakest finitely
presentable theory.
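For readers who want the constructions behind those claims, a sketch, assuming fresh theoretical vocabulary is available; this is my rendering of standard facts, not Stanford's text. Let $T$ be finitely axiomatized with observational consequences $O$, and let $P_1, P_2, \ldots$ be one-place predicates occurring neither in $T$ nor in the observation language. Each
$$T_n = T \wedge \forall x\, P_n(x)$$
is finitely axiomatized and has exactly the observational consequences $O$, since any model of $T$ expands to a model of $T_n$; and the $T_n$ are pairwise logically inequivalent, since $T_n$ does not entail $\forall x\, P_m(x)$ for $m \neq n$. Further, if $T'$ and $T''$ each entail $O$, so does $T' \vee T''$, which is finitely presentable and strictly weaker than $T'$ whenever $T''$ does not entail $T'$. Let $T^{*}$ be $T$ with its theoretical predicates renamed to fresh ones; then $T \vee T^{*}$ entails $O$ and, unless $T$'s theoretical vocabulary is idle, is strictly weaker than $T$. So the weakening never terminates.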
Some years ago I attended lectures by a prominent
philosopher and by the late Allen Newell. The prominent philosopher went on for
two lectures to the effect that some features of cognition are “hard wired” and
others not. Having enough of this, Newell asked what the philosopher’s
laboratory had discovered about which cognitive features are “hard-wired.” Flustered, the philosopher appealed to “division
of labor” between philosophy and psychology. To which Newell observed privately
that if that was the philosophers’ labor, psychologists could do it themselves,
thank you. And there is the trouble with
Stanford’s book. It is a lazy effort. If there are theories we cannot think of,
or have not thought of, in some domain, and surely in many domains there are a
great many, by all means help us find ways to survey and assess them. That is
what machine learning is about. Stanford has nothing to say. If we need a
reliable means to assign credit or blame among the many claims entailed by a
theory, seek for one. Stanford has nothing to say. The main thing he has to say
you knew before opening his book.
Sandra Mitchell, Unsimple Truths, University of Chicago Press, 2009
Sandra Mitchell’s book is more shadow than smoke. Trying to catch some definite, original content is like grasping a shadow, but the shadow
is always there, moving with your grasp. Mitchell rightly observes that
contemporary science proceeds across different “levels,” that many relations
are not additive (she says not “linear”), that many phenomena, especially biological
and social phenomena, have multiple causes, and that much of contemporary
science is addressed to finding regularities that are contingent, or
impermanent, or not general (she doesn’t distinguish these). One wonders for
whom this is news. No one I know. No
doubt she gets around more.
She argues for “emergence” rather than “reduction” and proclaims
a “new epistemology”: integrative pluralism. One might hope that this is the definite, original part, but it
turns out not to be so.
Epistemology comes in two phases: analysis (“S knows that P” and such) and method (how S can come to know that P, and such). There is no concrete thought in this book on
either score that is helpful, either to philosophy or to science. Modern
systems biology and neuropsychology have lots of problems about “high-dimensional, low-sample-size” data. She has nothing to offer. Social epidemiology has a host of problems about measurement, sampling and
statistical inference. She has nothing to offer. Cancer has complex interactive
causes hard to establish, and so do lots of social and cognitive phenomena. She
observes that there are problems, but has nothing helpful to offer.
Mitchell’s discussion
of emergence and reduction is a bit bewildering. On the one hand, she allows
that no one seriously thinks we are actually going to deduce social patterns
from facts about fundamental particles—and if some should try, let them go to
it but don’t pay them. So there is no methodological issue, only a metaphysical
one. On the other hand, she does not
dispute that, at the basis of nature, it’s physics. She isn’t arguing for any
transcendent powers. So what’s left? Apparently only this: one language can’t
express everything, so no language for physics can express everything.
Something will be left out. She offers no candidates for the omitted, but
suppose she were right. Suppose for any physical theory there are aspects of
the physical world that theory does not capture—not even logically, let alone
practically. Proving, rather than merely asserting, as much would be an impressive achievement, if only as a theoretical exercise, but what’s the point
for “integrative pluralism”? I see no implication whatever for the conduct of
science. Whether we think a theory of everything is possible or not,
the scientific community will still measure the large and the small, try to
separate phenomena into multiple aspects, look for mechanisms and try to
separate their components, suffer with interaction, with the limits of
predictability, computational complexity and the rest. It makes no difference to any of it whether the language of physics is finally complete or finally completable.
To judge from the blurb on the book jacket, scientists may
like reading this stuff, but if so that can only be because it is an aid to their
vanity, not to their science.
Bill Harper, Isaac Newton’s Scientific Method, Oxford University Press, 2011.
Much of this book is about another book, Books I and III of the
Principia. Harper details, almost lovingly, the theorems from Book I and how
they are used in the argument for universal gravitation in Book III, and on
that account the book is worth reading—with a copy of the Principia to
hand. But the question of Harper’s book is: What was Newton’s method? It was more than theorems.
Any reader of the first pages of Book III should get the
general idea of Newton’s argument. Starting with Kepler’s laws and using
theorems of Book I that are consequences of the three laws of motion, Newton
proves that for each primary in the solar system with a satellite, there exists an inverse-square force
attracting the satellite to its primary. He then shows that the motion of the
moon can be approximately accounted for by the combination of two such forces, one directed to the sun and one directed to the Earth. He then engages in a hypothetical, or suppositional, exercise, computing the acceleration the moon would have at the
surface of the Earth. Using experiments with pendulums, he shows that the
acceleration of the bob is independent of the mass and equals the suppositional
acceleration of the moon at the Earth’s surface, and infers that the
acceleration produced in one body by another is proportional to the mass of the
acting body and independent of the mass of the body acted upon. Applying his
rules of reasoning, he identifies the force of the Earth on the moon with
terrestrial gravity, and likewise the forces that solar system primaries exert
on their satellites, and concludes that gravitational force is universal.
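In modern units, with modern values (mine, not Newton's figures), the suppositional exercise is a short computation. The moon's centripetal acceleration is
$$a_{\mathrm{moon}} = \frac{4\pi^{2} r}{T^{2}} \approx \frac{4\pi^{2} \times 3.8 \times 10^{8}\ \mathrm{m}}{(2.36 \times 10^{6}\ \mathrm{s})^{2}} \approx 2.7 \times 10^{-3}\ \mathrm{m/s^{2}},$$
where $r \approx 60$ Earth radii and $T \approx 27.3$ days. If the attracting force varies as the inverse square of distance, a body at the Earth's surface, 60 times nearer the center, should accelerate $60^{2} = 3600$ times as much: $3600 \times 2.7 \times 10^{-3} \approx 9.8\ \mathrm{m/s^{2}}$, which is just the acceleration of falling bodies measured with pendulums. That agreement is the heart of the moon test.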
There are lots of details, many of which Harper carefully
goes through. But that leaves open the question at issue, what is the general
form of Newton’s method? Newton expresses the same themes of “general induction
from the phenomena” at the end of the Opticks, but we still want a general,
precise account of the method, whatever it is. How would we apply it or
recognize it in other cases? I essayed an account I called bootstrapping to
which various philosophers have offered objections I will not consider
here. Others, Jon Dorling for example,
have offered reconstructions. Harper discusses mine and rejects it, citing the various criticisms without further assessment. That’s OK, but what we should
expect is an alternative. Harper’s only suggestion is that Newton’s hypotheses
are “subjunctive.” We are left to wonder how that helps. Is Newton’s method
“subjunctive bootstrapping,” whatever that is, and, to engage the subjunctive,
what would that be and how could we recognize it or apply it in
other cases?
Harper resorts to vagaries, the substance of which is
ostensive: Newton’s method is like that.
We should expect more from
philosophical explication than demonstratives.