Zero Sum
How wonderful that we have met with a paradox. Now we have some hope of making progress. — Niels Bohr
Posted Jul 25th, 2011 by ravi

David DeGusta and Jason Lewis, writing in the New Scientist, raise the question “Is bias inevitable in science?”, and answer:

Stephen Jay Gould claimed unconscious bias could affect even seemingly objective scientific measurements. Not so.

Why not? Because “[scientific] method is so robust” that it can overcome the bias of scientists. Scientific Method has been studied extensively in the history and philosophy of science – cautiously defined and defended, criticised, found wanting or polemically dismissed – but DeGusta and Lewis are not concerned in their piece with these arguments, but rather with the particular story of “Gould’s skulls” and what that story tells us about scientific bias.

The backstory in short: in his analysis of the cranial measurements conducted by a 19th-century scientist named Samuel Morton, Gould found that Morton had “manipulated his samples, made analytical errors” and mismeasured crania. Gould concluded that these errors were the result of Morton’s racist bias.

DeGusta and Lewis go on to update the story with details of the recent effort by Lewis and collaborators to remeasure the crania studied by Morton, and their finding that, if anything, Morton overmeasured Egyptian crania (not white ones).

And therefore they write, in a passage that arguably summarises their thesis:

Gould was certainly right that all scientists, as humans, have some sort of bias. But while biased scientists are inevitable, biased results are not, as illustrated by Morton (biased) and his data (unbiased, as far as we can tell). Science does not depend on unbiased investigators but on methods which limit the ability of the investigator’s bias to influence the results.

This raises the question of what method permits the authors to generalise from a single [empirical] finding (the case of Morton’s measurements, accurate despite the damage their accuracy did to his racist beliefs) to so broad a claim as “biased scientists are inevitable, biased results are not”, and to their answer “Not so” to their own question “Is bias inevitable in science?” (it is worth noting that the title does not restrict itself to scientific measurements).

After all, Gould does not claim that all results are inevitably biased. The authors provide no direct quotes from Gould, so I repeat their own summary of his view: “Stephen Jay Gould claimed unconscious bias could affect even seemingly objective scientific measurements” [emphasis mine]; even they do not suggest that “Stephen Jay Gould claimed unconscious bias affects all objective scientific measurements”.

This is not, I believe, a trivial matter. The authors claim in a self-congratulatory tone that they (one of them) did the obvious thing that Gould had failed to do – to wit, remeasure the skulls. At the same time, they also claim that Gould’s thesis pertains primarily to “objective scientific measurements” or “actual measurements” [emphasis mine]. But if Gould did not actually remeasure the skulls, on what did he base his argument that the measurements were wrong? For this we can look to Gould’s original article:

Morton published all his raw data, and it is shown here that his summary tables are based on a patchwork of apparently unconscious finagling. When his data are properly reinterpreted, all races have approximately equal capacities.

It is worth quoting multiple sections from Gould’s original article to make the point to follow. Gould writes:

Morton … did supply one rare and precious gift to later analysts: he published all his primary data… I have reanalyzed Morton’s data and I find that they are a patchwork of assumption and finagling[.]

And:

[...] Morton’s method is suspect from the start for two reasons. First, he did not distinguish male from female skulls…. Second, he measured capacity by filling the skull with white mustard seed, sieved to reduce variation in grain size.

[Note: I am intentionally leaving out the substantive bits of Gould's argument in defence of his claim, since they do not pertain to the analysis here of DeGusta and Lewis's critique.]

What becomes clear in reading Gould’s paper is that contrary to DeGusta and Lewis’s summary, Gould was concerned with the measurement methodology and statistical presentation used by Morton. Having placed a thesis in their opponent’s mind that bias impacts “actual measurements“, DeGusta and Lewis are surprised that Gould did not then regenerate the “actual measurements“. But Gould was content to work with the “primary data” published by Morton – Morton’s “precious gift“. Contrary to DeGusta and Lewis’s apparent understanding, it should be clear from the quoted sections above that Gould’s claim is not that bias miraculously jumps from the scientist’s mind into the measuring devices and thence to the raw data.
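
What Gould means can be made concrete. Here is a toy simulation (hypothetical figures of my own, not Morton’s or Gould’s numbers): two groups drawn from identical underlying distributions come out with different measured mean capacities purely because their samples contain different shares of the smaller female crania – “finagling” by sample composition, with every individual measurement accurate.

    # Toy illustration (hypothetical numbers, not Morton's data): identical
    # populations, different sample compositions, different "measured" means.
    import random

    random.seed(1)

    def sample_mean_cc(n, female_share):
        """Mean capacity of n crania sampled with the given female share."""
        total = 0.0
        for _ in range(n):
            if random.random() < female_share:
                total += random.gauss(1300, 80)  # assumed female mean (cc)
            else:
                total += random.gauss(1450, 80)  # assumed male mean (cc)
        return total / n

    # Both groups share the same underlying distributions; only the
    # proportion of female crania in each sample differs.
    print(f"group A (30% female sample): {sample_mean_cc(500, 0.3):.0f} cc")
    print(f"group B (60% female sample): {sample_mean_cc(500, 0.6):.0f} cc")

No instrument and no individual measurement is biased here; the bias lives entirely in the unexamined choice of what to pool and average – the layer at which Gould directed his critique.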

Time and again in his paper, Gould mentions the problem of “finagling” (dictionary: “act in a devious or dishonest manner”) and its centrality – an issue dismissed to the margins by DeGusta and Lewis: “extreme bias cannot, short of fraud, influence the results” [emphasis mine]. By mentioning and dismissing only the conscious act of fraud, DeGusta and Lewis in effect dismiss the “large middle ground” (to borrow a term from Gould) between conscious fraud and objective data.

DeGusta and Lewis end on a celebratory note about the greatness of science (due, one assumes, to its methodological strength, as they see it, which obviates the need to examine such tricky issues as individual motivation). Gould derives a very different caution from his examples. In examining the Morton episode and the others (Newton, Mendel), he finds lessons for the scientific community:

I do share the scientist’s faith that “correct” answers exist for most problems, and I believe that fudged data are paramount as impediments to solutions. I only raise what I regard as a pressing issue with two hopes for alleviation — first … we may examine our own activity more closely; second, that we may cultivate, as Morton did, the habit of presenting candidly all our information and procedure, so that others can assess what we, in our blindness, cannot.

In opposition to DeGusta and Lewis’s optimism about the ability of scientific method to limit the effect of scientists’ bias, Gould’s significantly more nuanced position (read the paper linked to above) recognises (as philosophers of science have before) both the prevalence of – and, I would argue, the need for – “finagling” or fudging in scientific work, and the consequent need to be cognisant of the implications of this fact.

DeGusta and Lewis claim to debunk Gould’s claims of error in Morton’s data. If the remeasurement and reanalysis carried out by Lewis is correct, all that demonstrates is that Morton does not serve as evidence for Gould’s thesis, not that the thesis is incorrect – particularly since Gould himself offers additional examples of scientists fudging data, and many more are available: see the controversy surrounding Eddington’s measurements purporting to confirm Einstein’s theory of general relativity.

Scientists take shortcuts (for practical as well as necessary reasons). They base these shortcuts on their beliefs and commitments (ontological, epistemological, political, and so on). Science advances because results are held to be provisional, not because “truth is … obtainable” or “science … is self-correcting”. DeGusta and Lewis offer nothing to contest this finding of historians and philosophers of science (including Gould), nor do they offer any explanation of how scientific method auto-corrects errors in such shortcuts (unless, of course, all they mean by scientific method is the mundane processes, not exclusive to science, of examining prior assumptions and conclusions, checking for errors, rigour, and so on… none of which equals “self-correcting”).


Posted Jan 1st, 2011 by ravi

When Henry Louis Gates Jr. told Sgt. James Crowley of the Cambridge police, “You don’t know who you’re messing with,” he was speaking truth to power, albeit in a manner more akin to arrogance than erudition. The big shock here, according to the Pulitzer Prize-winning columnist Eugene Robinson, is not that a Harvard professor misused the subjective case (“who” for “whom”) and inelegantly ended a sentence with a preposition; it is, rather, that Gates belongs to an elite enclave beyond the sergeant’s experience or imagination.

Thus starts an entirely positive review of Eugene Robinson’s Disintegration: The Splintering of Black America by Raymond Arsenault in the New York Times. Arsenault approvingly details Robinson’s thesis – as he puts it – that it no longer makes sense to talk monolithically of Black Americans; we must instead acknowledge and account for the disparate and at times competing claims of the groups that Black America has splintered into.

Without reading Robinson it is not possible to pass judgement on the strength of his claims, but Arsenault’s own reasoning is not well served by the anecdote he begins with. The Henry Louis Gates episode demonstrates, if anything at all, that however high a black American might rise, he is subject to the same old racism that stalks his less fortunate brethren.

Arsenault pays attention to Gates’s words in an attempt to show that Gates has risen far above a lowly police sergeant, even a white one. However, Gates’s words merely demonstrate the delusion he lived under (apparently shared by Arsenault and Robinson), while the policeman’s actions and the ensuing false outrage forced the President (another black person who Arsenault might claim has risen above black identity) to apologise for describing the police action in straightforward terms.

Posted Nov 8th, 2010 by ravi

Commenting on the Stewart/Colbert non-rally in the NYRB, Janet Malcolm approvingly quotes David Carr from the New York Times:

Most Americans don’t watch or pay attention to cable television. In even a good news night, about five million people take a seat on the cable wars, which is less than 2 percent of all Americans. People are scared of what they see in their pay envelopes and neighborhoods, not because of what Keith Olbermann said last night or how Bill O’Reilly came back at him.

This Malcolm calls a “brutal truth”. But is this really true? And what sort of truth is it?

Five million may be 2% of all Americans, but is it the same 2% that “take a seat on the cable wars” each night? Malcolm and Carr do not tell us.

Further, should the percentage be calculated against the entire population? Or the adult population (approximately 225 million)? Or the households (about 105 million in 2000)? (source: US Census Bureau QuickFacts)
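
For concreteness, here is that arithmetic under each candidate denominator (the adult and household figures are those cited above; the total population figure is my rough approximation for 2010):

    # Carr's five million viewers against three plausible denominators.
    # Figures are approximations: ~308M total US population (circa 2010),
    # ~225M adults and ~105M households as cited above.
    viewers = 5_000_000

    denominators = {
        "total population": 308_000_000,
        "adult population": 225_000_000,
        "households": 105_000_000,
    }

    for name, total in denominators.items():
        print(f"share of {name}: {viewers / total:.1%}")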

Even as a percentage of households, this audience amounts to only 5%, but this ignores the vaunted “network effect”. In other words, what of the influence of this audience as they disperse into workplaces, Starbuckses and Elk Lodges to magnify the voices of their liking?

In polling the citizenry on the bizarre notion that Barack Obama is a Muslim, the Pew Research Center (about as reputable as polling outfits get, from what I can tell) found that 18% of Americans subscribe to this idea and 43% say “they do not know what Obama’s religion is”. And only 46% of Democrats are of the opinion that Obama is a Christian.

Where would the people polled gain these impressions? We know the answer because Pew asked them:

When asked how they learned about Obama’s religion in an open-ended question, 60% of those who say Obama is a Muslim cite the media. Among specific media sources, television (at 16%) is mentioned most frequently.

And these beliefs, Pew finds, have a political price:

Beliefs about Obama’s religion are closely linked to political judgments about him. Those who say he is a Muslim overwhelmingly disapprove of his job performance, while a majority of those who think he is a Christian approve of the job Obama is doing. Those who are unsure about Obama’s religion are about evenly divided in their views of his performance.

Even a summary consideration of the matter suggests that pocketbook and neighbourhood issues (the concerns Carr contrasts with the screeds of cable commentators) are not mutually exclusive of the opinions expressed on television. Often, it is in these media that personal experiences of economic or security concerns are corralled into political viewpoints.

Posted Sep 10th, 2010 by ravi

The world of polling is a murky one. Outside of psychology and economics, it may be the only proclaimed mathematical/scientific enterprise whose participants can arrive at wildly variant conclusions without having their predictions subjected to validation by reality. FiveThirtyEight is a political blog (now part of the New York Times) that has of late shed some long-overdue light (in the form of analytical rigour) on these turbid waters, and has withstood the test of empirical validation.

It is therefore odd that they step into one of the other two suspect areas just mentioned, namely economics, with an opinion that seems shoddily reasoned. The opinion is expressed in the title of a recent post on their blog, “Potential for Double-Dip Recession Seems Small” – an opinion in need of significant substantiation, considering it runs counter to that of the Nobel laureate economist Paul Krugman, who has been calling things fairly accurately these past few years:

Paul Krugman said he sees about a one-third chance the U.S. economy will slide into a recession during the second half of the year as fiscal and monetary stimulus fade.

“It is not a low probability event, 30 to 40 percent chance,” Krugman said today in an interview in Atlanta, where he was attending an economics conference. “The chance that we will have growth slowing enough that unemployment ticks up again I would say is better than even.”

FiveThirtyEight thinks otherwise. And the author of the post, Hale Stewart, starts out commendably:

[A] key element lacking in the talk of double-dip recessions is what actually caused past recessions – that is, what are the primary reasons an economy slows to the point where its growth contracts for at least two quarters – followed by an analysis of whether those conditions exist in the current economic environment.

But in the very next paragraph, Stewart jumps to “indicators” rather than meditating on “causes” and “conditions”:

Perhaps the most obvious economic indicator of a coming recession is rising interest rates, one of the primary policy tools available to the Federal Reserve. According to generally accepted wisdom, the Fed is supposed to lower interest rates during a recession to spur lending and loan demand, and then raise interest rates after the economy expands to prevent inflation from getting out of hand.

Stewart backs up this idea with a chart demonstrating that rises in the Federal Reserve’s short-term interest rates exhibit a strong correlation with the onset of recessions. He draws attention to the obverse scenario as well:

Finally, the chart shows that short-term interest rates are the lowest they’ve been in over 50 years, an event that typically occurs as the economy is exiting a recession, not entering one.

To summarise, if the Fed’s interest rates are going up, that indicates a recession is on its way, and if the interest rates are at historic lows, that indicates the economy is exiting a recession.

Recall, at this point, that Stewart’s stated desire is to shift the conversation to the causes, reasons and conditions that lead to a recession. This is poorly served by launching his analysis with talk of “indicators”, in particular the correlation of Fed interest rates with GDP change. Indicators are, after all, not causes – unless Stewart believes that the Fed’s jacking up of rates is itself a cause of recessions! In fact, there is some reason to suspect that he might indeed believe that interest rate hikes lead to recession: when he does embark on examining causes, he starts out with “Another leading cause of recession is some type of financial crisis” (emphasis mine).

But the idea that Fed interest rate variation is the first [listed] cause of recessions is hardly established by charting its behaviour in relation to the state of the economy. After all, one of the first rules of statistics is, as I am sure Stewart knows, that correlation does not imply causation. This is made only worse by the fact that Stewart can call upon only one data point — the recessions of the early 1980s — that pertains to a double-dip recession.

As Stewart himself tells us, interest rates are used by the Fed as a corrective measure to temper an over-heating economy (“irrational exuberance” as one Fed chairman in recent history put it). In that sense, rising interest rates can be more legitimately interpreted as a trailing indicator of an economy heading deeper into unsustainable territory.
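
That point, and the hazard of reading causes off Stewart’s chart, can be made concrete with a minimal simulation (made-up numbers, nothing to do with Stewart’s data): let a single latent condition, an overheating economy, drive both the Fed’s hikes and the subsequent recessions, with no causal link between the two, and the hikes still “predict” recessions just as the chart suggests:

    # A confounding sketch (hypothetical data): "overheating" prompts both
    # rate hikes and recessions, so the two correlate without causation.
    import random

    random.seed(0)
    N = 200
    overheating = [random.random() for _ in range(N)]  # latent condition

    # The central bank hikes when the economy overheats...
    hike = [x > 0.7 for x in overheating]
    # ...and an overheated economy tends to slide into recession anyway.
    recession = [x > 0.7 and random.random() < 0.8 for x in overheating]

    n_hikes = sum(hike)
    p_rec_hike = sum(r for h, r in zip(hike, recession) if h) / n_hikes
    p_rec_none = sum(r for h, r in zip(hike, recession) if not h) / (N - n_hikes)
    print(f"P(recession | hike)    = {p_rec_hike:.2f}")  # high
    print(f"P(recession | no hike) = {p_rec_none:.2f}")  # near zero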

All through the growth of the Internet bubble, from about 1995 to the turn of the millennium, interest rates held steady, gaining a modest percentage point or so between 1999 and 2001. Stewart could appeal to this small rise as a cause, or even an indicator, of the coming recession; but that ignores the majority of the mischief that had occurred by the time the hike was put in place: the eToys IPO, which closed on opening day at three times the starting price, occurred in May 1999. The Netscape IPO, which made millionaires out of early employees and investors, and paupers of the rest, pushed the infant software vendor up to a total valuation of around $2 billion in 1995. Any number of examples of such speculative excess preceding the rate hike (and subsequent recession) can be found by looking up the relevant data for familiar “.com” darlings of the era.

For a different picture of interest rates and recessions, one can take a look at Japan from 1990 onwards (when the country entered an economic slump that it has been unable to shake off). The two charts below (obtained from TradingEconomics) show the annual GDP growth rate and the change in the interest rate (in that order):

Japan experienced recessions in mid-1993, mid-1997 and early 1998 (a double dip!), most of 2001, and 2008-2009. In line with Stewart’s expectations, an interest rate peak in 1990-1991 coincides with the economy’s slide into recession. However, things turn less predictable thereafter. Apart from rare fractional upticks, the interest rate takes a controlled descent to zero, with no significant impact on the economy, which has since remained in the doldrums, with intermittent recessions to rub salt in the wound.

Stewart does get to other considerations shortly after the section on interest rates. In particular, he lists three “causes” for recessions (the bullet list below quotes text from his post):

  • some type of financial crisis that paralyzes a significant portion of the financial intermediary system (e.g: Great Depression, S&L crisis, housing bubble in 2007-08)
  • Commodity price increases (e.g: oil prices, which have an important psychological impact on consumer sentiment)
  • ["A final cause"] bursting of some financial bubble (leading to depressed consumer sentiment)

Two of the three listed causes reveal the consumer-driven nature of the current US economy. That should be worthy of some attention when we attempt to understand the causes of recessions. Be that as it may, at least two (the first and last) are themselves outcomes rather than causes. Financial crises and bubbles, being the result of untrammelled speculation and/or a lack of responsibility or accountability, can be symptomatic of the underlying nature of the market in which they arise.

With the market uncoupled from the base realities of need, feasibility, sustainability, profitability and so on (an uncoupling often achieved through mere word play: “the new economy”), and unhindered by government regulation, the question that remains about the economy is not whether we have put the particular crises of the current episode behind us (a point made by Stewart) but whether the system will regain sufficient amnesia to enter the next boom-bust cycle. As Paul Krugman and Robin Wells point out in the New York Review of Books:

Whatever the precise causes of the housing bubble, it’s important to realize that bubbles in general aren’t at all unusual. On the contrary, as Yale’s Robert Shiller explained at length in his justly celebrated book, Irrational Exuberance, they are a recurring feature of financial markets.

By stating the above, Krugman and Wells are not avoiding the questions that are central to Stewart: what are the root causes of recessions, and what do they say about the chances of a double dip? To the contrary, the Krugman/Wells piece, a review of three books that also address these issues, is titled “The Slump Goes On: Why?”, and digs deeper into the “financial crisis that paralyzes” that Stewart considers a cause of recessions: “The … answer”, they write, “is that by 2007 the financial system had evolved to a point where both traditional bank regulation and its associated safety net were full of holes.” And they point to Minsky:

Minsky’s theory, in brief, was that eras of financial stability set the stage for future crisis, because they encourage a wide variety of economic actors to take on ever-larger quantities of debt and engage in ever-more-risky speculation.

Stewart, in his blog post, mostly eschews such fundamental considerations, taking comfort instead in numbers indicating that the current set of crises, measured by the criteria of bank earnings and commodity prices, is under control, to conclude that the economy is on the mend — or at the least not facing another recession.

But there are base realities about the economic state of affairs, not captured by bank profitability, that Krugman restores to centrality:

Let’s be clear: a recovery that involves growth so slow that unemployment and excess capacity rise, not fall, isn’t really a recovery. If we have only 1 1/2 percent growth, that will amount to a double dip in all the senses that matter.
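
The arithmetic behind Krugman’s claim is, in effect, the rough empirical regularity known as Okun’s law (my gloss; Krugman does not invoke it by name here): when growth runs below trend, unemployment rises even as output technically grows. A stylised sketch, with assumed values for the trend rate and the sensitivity coefficient:

    # Stylised Okun's-law sketch (assumed parameters, not Krugman's own
    # calculation): growth below trend raises the unemployment rate.
    TREND_GROWTH = 2.5  # assumed long-run US trend, % per year
    OKUN_COEFF = 0.5    # assumed pts of unemployment per pt of growth gap

    def unemployment_change(growth_pct):
        """Approximate one-year change in the unemployment rate (pts)."""
        return OKUN_COEFF * (TREND_GROWTH - growth_pct)

    for g in (1.5, 2.5, 4.0):
        print(f"{g:.1f}% growth -> unemployment {unemployment_change(g):+.1f} pts")

Under these assumptions, 1.5 percent growth adds roughly half a point to unemployment over a year – a “recovery” in which joblessness rises, which is precisely the sense in which it is a double dip.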

(Federal Funds Rate image courtesy of Wikipedia)


Posted Sep 8th, 2008 by ravi

Apropos the name of this blog, some comments from Jeffrey Sachs in SciAm:

Are Malthus’s Predicted 1798 Food Shortages Coming True? (Extended version)

[...]

Indeed, when I trained in economics, Malthusian reasoning was a target of mockery, held up by my professors as an example of a naïve forecast gone wildly wrong. After all, since Malthus’s time, incomes per person averaged around the world have increased at least an order of magnitude according to economic historians, despite a population increase from around 800 million in 1798 to 6.7 billion today. Some economists have gone so far as to argue that high and rising populations have been a major cause of increased living standards, rather than an impediment. In that interpretation, the eightfold increase in population since 1798 has also raised the number of geniuses in similar proportion, and it is genius above all that propels global human advance. A large human population, so it is argued, is just what is needed to propel progress.

Yet the Malthusian specter is not truly banished—indeed far from it. Our increase in know-how has not only been about getting more outputs for the same inputs, but also about our ability to mine the Earth for more inputs. The first Industrial Revolution began with the use of fossil fuel, specifically coal, through Watt’s steam engine. Humanity harnessed geological deposits of ancient solar energy, stored as coal, oil, and gas, to do our modern bidding. We learned to dig deeper for minerals, fish the oceans with larger nets, divert rivers with greater dams and canals, appropriate more habitats of other species and cut down forests with more powerful land-clearing equipment. In countless ways, we have not gotten more for less but rather more for more, as we’ve converted rich stores of natural capital into high flows of current consumption. Much of what we call “income,” in the true sense of adding value from economic activity, is actually depletion instead, or the running down of natural capital.

[...]

Posted Sep 5th, 2008 by ravi