an example
from my own research, not on fossils but on baseball. (I don't mean to be either
facetious or flaky; neither, to use a metaphor appropriate to the subject, am
I grandstanding. Baseball is a remarkable and unmatched source for the study of
trends, for it offers nearly 100 years of fully comprehensive data on a system
operating with no major changes of rules or dimensions. The last major alteration,
the retreat of the pitcher's mound to its current distance of 60 feet 6 inches
from home plate, occurred in 1893.) The disappearance of .400 hitting
is the most widely discussed and puzzling trend in the history of baseball. Fans
lament this supposedly lost excellence, and I doubt that any other "decrease
trend" has received so much discussion and speculation in our culture. The
facts are impressive: only one player has exceeded .400 in the last half century
(Ted Williams at .406 in 1941), but eight players hit higher than .410 between
1890 and 1930. Where did this excellence go?
Traditional explanations
are legion, but all rely upon a mistake in categories and a confusion between
movement of an entity and reduction in variance of a system. The traditional arguments
range from the nostalgic and sentimental ("they don't make 'em as tough as
they usta"), to the analytic (more grueling schedules, too many night games,
bigger gloves for fielders, invention of the slider, introduction of relief pitching
specialists), but all share the fallacious assumption that .400 hitting is a "thing"--an
entity that used to exist, has since disappeared, and therefore has moved away.
But if we view .400 hitting as an extreme value in a coherent system of variation,
then we have the key to a correct understanding.
The simplest hypothesis
would hold that batting averages, in general, have declined, bringing down the
extremes in this general fall. But this easiest potential explanation is false.
Not only have league averages for full-time players remained approximately constant
(at roughly .260) for more than 100 years, but baseball's guardians have always
intervened to change the subtle balance between pitchers and hitters, and to restore
the .260 equilibrium, whenever any trend or innovation gave temporary advantage
to one side or the other (decreasing the strike zone and lowering the mound when
pitching became dominant in the mid 1960's; adopting the foul strike rule in the
1890's when batting improved drastically after the pitcher's mound moved back
to its current distance). The disappearance of .400 hitting has therefore occurred
within a system that maintains an unvarying mean.
This situation led
me, several years ago (Gould, 1983, 1986), to conjecture that the disappearance
of .400 hitting might best be viewed as a consequence of decreasing variance about
this unchanging mean. Since the reasons for declining variance are so different
from the causes for removal of an entity, I realized that such a finding would
revise interpretations of this trend. In the immodesty of a "hot idea,"
I did expect to find such a symmetrical decline of variance in both tails of the
distribution for batting averages. But I was not prepared for the uncanny, exceptionless
regularity of the data.
Figure 7 shows the symmetrical shrinkage of
extreme values for both low and high batting averages for means of the five highest
and five lowest averages in each season (by decade). Figure 8 is the full and
proper calculation of standard deviations for all regular players (defined as
two at-bats or more per game, giving N between 100 and 300 for each season) year
by year from the foundation of the National League in 1876 to 1980. The steadily
declining variance is uncanny in its regularity (and cannot be attributed to such
artifacts as short seasons, and therefore fewer at-bats with greater variance,
early in baseball's history--for seasons of more than 100 games were already
established in the 19th century). For example: all four beginning years of the
1870's exceed 0.5 in standard deviation, but the last value larger than 0.5 occurs
in 1886. Other 19th century values are all 0.4--0.5 (with three just below
at 0.38--0.40), while the last reading in excess of 0.4 occurred in 1911.
Even small details of later decline in the 0.3--0.4 range show regularity:
the last value as high as 0.37 occurred in 1937; 0.35 was exceeded for the last
time in 1941. Since 1957, only two years have topped 0.34. Between 1942 and 1980
all values remain in the restricted range of 0.285--0.348. All standard deviations
from 1906 back to the beginning of major-league baseball in 1876 are higher than
every value from 1938 on. There is no overlap at all.
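Readers who wish to repeat the calculation on data of their own may find the following sketch useful; it is written in Python, and the data file and column names ("year", "games", "at_bats", "avg") are hypothetical stand-ins for any table of player-season records, not part of the original analysis.

    # Sketch (hypothetical data): recompute the quantities plotted in Figures 7 and 8
    # from a table of player-season batting records. The file name and the column
    # names are illustrative assumptions.
    import pandas as pd

    seasons = pd.read_csv("batting_seasons.csv")

    # Regular players, as defined in the text: two or more at-bats per game.
    regulars = seasons[seasons["at_bats"] >= 2 * seasons["games"]]

    rows = []
    for year, group in regulars.groupby("year"):
        avgs = group["avg"].sort_values()
        rows.append({
            "year": year,
            "n": len(avgs),                   # N per season, roughly 100 to 300
            "mean": avgs.mean(),              # hovers near the .260 equilibrium
            "top5": avgs.tail(5).mean(),      # Figure 7, upper tail
            "bottom5": avgs.head(5).mean(),   # Figure 7, lower tail
            "std": avgs.std(ddof=0),          # Figure 8, spread among regulars
        })

    print(pd.DataFrame(rows).to_string(index=False))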
I conclude, therefore, that .400 hitting has disappeared as an automatic
consequence of symmetrically shrinking variation around a constant mean.
This new depiction of an old observation implies a reversed interpretation
as well. The old explanations wept and wailed, because they assumed that
something precious had been lost--the obvious interpretation for removal
of an entity. But I hold that the trend reflects increasing general excellence
of play--and that symmetrically shrinking variance should occur in
systems that stabilize as they improve. Pitching and hitting have both
become substantially better as training of athletes intensifies, and as
opportunities open for players of all races and nations. But the balance
between hitting and pitching has been maintained as both improve--and
we define that balance by the unchanged mean batting average of .260.
As everyone gets better, the discrepancy between average and best must
decrease (leading to the disappearance of .400 hitters), while poor batters
once tolerated for excellence in fielding no longer make the grade as
the pool of players who can both hit and field grows (leading to shrinkage
of the left tail). The game has become more precise and unfailingly correct
in execution--as 100 years of trial and error distill the optimal
procedures in all situations of fielding, hitting, and pitching. The best
can no longer take advantage of sloppiness in a young system still regulating
its subparts. Wee Willie Keeler could "hit 'em where they ain't,"
and bat .432 in 1897, because fielders didn't yet know where they should
be. Now every pitch and every hit is charted; the weaknesses and propensities
of every batter are assessed in detail. Boggs and Carew were surely better
hitters than Keeler, but neither has reached .400 in modern baseball. Increasing
general excellence of play has eliminated .400 hitting, but we must first
picture the phenomenon as a symmetrical reduction in variance before we
can grasp this explanation.
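A rough way to see why a fixed mean and a shrinking spread must, by themselves, eliminate .400 hitting is to idealize season averages as approximately normal; this idealization is introduced here only for illustration. The expected best average among N regulars is then approximately

\[
\mathbb{E}\!\left[\max_{1 \le i \le N} X_i\right] \;\approx\; \mu + c_N\,\sigma,
\qquad X_i \sim \mathcal{N}(\mu, \sigma^2),
\]

where c_N, the expected maximum of N independent standard normal deviates, is roughly 2.5 to 2.9 for N between 100 and 300. With the mean pinned near .260, the expected best average declines in direct proportion to the standard deviation, so shrinking variance alone suffices to pull the right tail below .400.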
Potentially random plucking.--Anagenetic bias also lies behind
our conventional (and probably incorrect) interpretation of what may be
the most important of all trends mediated by reduction in variance--patterns
of removal and replacement surrounding major phases of extinction. Consider
our two modes of diversity and disparity, both properly described as reduction
in variance among entities. First, mass extinction reduces variance in
an entire system because large-scale episodes completely eliminate many
clades, and therefore reduce the number of Baupläne available
for recruitment of new taxa. Second, consider the extinction of the 20-or-so
Burgess Shale phylum-level Baupläne (Whittington, 1985; Briggs
and Conway Morris, 1986). We have not resolved the actual pattern in this
case, because we cannot trace the Burgess creatures through later soft-bodied
faunas, and cannot tell whether Burgess "oddballs" succumbed
in droves during mass extinctions, or petered out. But suppose that the latter
alternative holds, and that diversity remained constant while disparity
declined as the likes of Opabinia, Hallucigenia, and Anomalocaris
disappeared.
In either case--provided that
diversity is reestablished at or above former levels following the episode of
extinction--traditional interpretation has favored the anagenetic mode of
survival and change under competition. That is, we have assumed